PLearn 0.1
Implements a gaussian-based output layer for the Neural Network Language Model. More...
#include <NnlmOutputLayer.h>
Public Types | |
enum | { COST_DISCR = 0, COST_APPROX_DISCR = 1, COST_NON_DISCR = 2 } |
enum | { LEARNING_DISCRIMINANT = 0, LEARNING_EMPIRICAL = 1 } |
Public Member Functions | |
NnlmOutputLayer () | |
Default constructor. | |
void | resetParameters () |
Resizes variables and sets pretty much everything back to a 'zero' value. | |
void | resetAllClassVars () |
Used for initializing s_sumI, sumI, sumR, sumR2, as well as pi, mu and sigma2 to the max likelihood values. | |
void | updateClassVars (const int the_target, const Vec &the_input) |
void | applyAllClassVars () |
void | computeEmpiricalLearningRateParameters () |
Computes the word-specific empirical learning rates. MUST be called after a valid call to applyClassCounts(). | |
void | setTarget (int the_target) const |
Sets t, the target. | |
void | setContext (int the_context) const |
Sets the context. The Candidates set of the approximated discriminant cost is determined from the context. | |
void | setCost (int the_cost) |
Sets the cost used in the fprop() | |
void | setLearning (int the_learning) |
virtual void | fprop (const Vec &input, Vec &output) const |
Computes the 'cost' for the 'target' given the input, and computes the output (possibly resizing it appropriately). | |
virtual void | bpropUpdate (const Vec &input, const Vec &output, Vec &input_gradient, const Vec &output_gradient) |
Adapt based on the output gradient: this method should only be called just after a corresponding fprop; it should be called with the same arguments as fprop for the first two arguments (and output should not have been modified since then). | |
virtual void | forget () |
reset the parameters to the state they would be BEFORE starting training. | |
void | compute_nl_p_rt (const Vec &input, Vec &output) const |
Computes -log( p(r,t) ) = -log( ( umc p_gaussian(r|t) + (1-umc) p_uniform(r|t) ) p(t) ). | |
void | compute_approx_nl_p_t_r (const Vec &input, Vec &output) const |
Computes the approximation -log( p(t|r) ) using only some candidates for normalization. | |
void | compute_nl_p_t_r (const Vec &input, Vec &output) const |
Computes -log( p(t|r) ) | |
void | getBestCandidates (const Vec &input, Vec &candidate_tags, Vec &probabilities) const |
returns best candidates according to compute_nl_p_t_r | |
void | computeNonDiscriminantGradient () const |
Gradients with respect to input. | |
void | computeApproxDiscriminantGradient () const |
MUST be called after the corresponding fprop. | |
void | computeDiscriminantGradient () const |
MUST be called after the corresponding fprop. | |
void | addCandidateContribution (int c) const |
void | applyMuAndSigmaEmpiricalUpdate (const Vec &input) const |
mu and sigma updates | |
void | applyMuGradient () const |
Compute and apply gradients of different costs with respect to mus. | |
void | applyMuTargetGradient () const |
MUST be called after the corresponding fprop. | |
void | applyMuCandidateGradient (int c) const |
MUST be called after the corresponding fprop. | |
void | applySigmaGradient () const |
Compute and apply gradients of different costs with respect to sigmas. | |
void | applySigmaTargetGradient () const |
void | applySigmaCandidateGradient (int c) const |
virtual string | classname () const |
virtual OptionList & | getOptionList () const |
virtual OptionMap & | getOptionMap () const |
virtual RemoteMethodMap & | getRemoteMethodMap () const |
virtual NnlmOutputLayer * | deepCopy (CopiesMap &copies) const |
virtual void | build () |
Post-constructor. | |
virtual void | makeDeepCopyFromShallowCopy (CopiesMap &copies) |
Transforms a shallow copy into a deep copy. | |
Static Public Member Functions | |
static string | _classname_ () |
static OptionList & | _getOptionList_ () |
static RemoteMethodMap & | _getRemoteMethodMap_ () |
static Object * | _new_instance_for_typemap_ () |
static bool | _isa_ (const Object *o) |
static void | _static_initialize_ () |
static const PPath & | declaringFile () |
Public Attributes | |
int | target_cardinality |
specifies the range of the values of 'target' | |
int | context_cardinality |
specifies the range of the values of 'context' (ex: + 'missing' tag) | |
real | sigma2min |
minimal value σ² can have | |
real | dl_start_learning_rate |
Discriminant learning (of μ and σ²) - dl. | |
real | dl_decrease_constant |
real | el_start_discount_factor |
Empirical learning (of μ and σ²) - el. | |
int | step_number |
keeps track of updates | |
real | umc |
We use a mixture with a uniform to prevent negligible probabilities, which cause gradient explosions. | |
Vec | pi |
pi(i) = empirical frequency of c==i, i.e. p(c) | |
Mat | mu |
Gaussian parameters - p_g(r|c) | |
Mat | sigma2 |
Vec | global_mu |
Vec | global_sigma2 |
int | s_sumI |
EMPIRICAL LEARNING Intermediaries. | |
TVec< int > | sumI |
Mat | sumR |
Mat | sumR2 |
Vec | global_sumR |
Vec | global_sumR2 |
TVec< int > | shared_candidates |
Holds candidates. | |
TVec< TVec< int > > | candidates |
int | learning |
int | cost |
Must be set before calling fprop. | |
int | target |
the current word -> we use its parameters to compute output | |
int | the_real_target |
int | context |
real | s |
real | g_exponent |
real | log_g_det_covariance |
real | log_g_normalization |
Vec | vec_log_p_rg_t |
Vec | vec_log_p_r_t |
Vec | vec_log_p_rt |
real | log_sum_p_ru |
Mat | beta |
Vec | nd_gradient |
Vec | ad_gradient |
Vec | fd_gradient |
Vec | bill |
Vec | bob |
Vec | gradient_log_tmp |
Vec | gradient_log_tmp_pos |
Vec | gradient_log_tmp_neg |
Vec | el_start_learning_rate |
Vec | el_decrease_constant |
Vec | el_last_update |
real | el_dr |
The original way of computing the mus and sigmas (ex. | |
real | dl_lr |
bool | is_learning |
Static Public Attributes | |
static StaticInitializer | _static_initializer_ |
Static Protected Member Functions | |
static void | declareOptions (OptionList &ol) |
Declares the class options. | |
Private Types | |
typedef OnlineLearningModule | inherited |
Private Member Functions | |
void | build_ () |
This does the actual building. |
Implements a gaussian-based output layer for the Neural Network Language Model.
Given 'r' the output of the previous layer (the representation of the input), and 't' the target class, this module models p(r|t) as a mixture of a gaussian model and a uniform: p(r|t) = umc * p_g(r|t) + (1-umc) * p_u(r|t). We have p_u(r|c) = 1.0 / 2^input_size.
The output is then computed from p(r,t) = p(r|t) * p(t): depending on the cost in use, it is -log( p(r,t) ) (COST_NON_DISCR), -log( p(t|r) ) = -log( p(r,t) / sum_u p(r,u) ) (COST_DISCR), or the same quantity normalized over a restricted candidate set only (COST_APPROX_DISCR).
Learning of μ and σ² can be: discriminant (LEARNING_DISCRIMINANT) or empirical, i.e. maximum likelihood (LEARNING_EMPIRICAL).
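To make the density concrete, here is a minimal standalone sketch (not PLearn code; density_r_given_t and all parameter names are illustrative only) of p(r|t) for one class t with diagonal covariance, assuming the uniform component spans a volume of 2^d such as [-1,1]^d:

    #include <cmath>
    #include <vector>

    // Sketch of p(r|t) = umc * p_g(r|t) + (1-umc) * p_u(r|t), with p_g a
    // diagonal-covariance gaussian and p_u(r|t) = 1 / 2^d.
    double density_r_given_t(const std::vector<double>& r,
                             const std::vector<double>& mu_t,      // mu(t, .)
                             const std::vector<double>& sigma2_t,  // sigma2(t, .)
                             double umc)
    {
        const double kPi = 3.14159265358979323846;
        const std::size_t d = r.size();
        double exponent = 0.0, log_det = 0.0;
        for (std::size_t i = 0; i < d; ++i) {
            const double s = r[i] - mu_t[i];
            exponent += s * s / sigma2_t[i];    // (r-mu)' Sigma^{-1} (r-mu), Sigma diagonal
            log_det  += std::log(sigma2_t[i]);  // log det(Sigma) = sum of log variances
        }
        const double log_p_g = -0.5 * (exponent + d * std::log(2.0 * kPi) + log_det);
        const double p_u = std::pow(2.0, -static_cast<double>(d));
        return umc * std::exp(log_p_g) + (1.0 - umc) * p_u;
    }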
Definition at line 70 of file NnlmOutputLayer.h.
typedef OnlineLearningModule PLearn::NnlmOutputLayer::inherited [private] |
Reimplemented from PLearn::OnlineLearningModule.
Definition at line 72 of file NnlmOutputLayer.h.
anonymous enum |
Definition at line 271 of file NnlmOutputLayer.h.
{COST_DISCR=0, COST_APPROX_DISCR=1, COST_NON_DISCR=2}; // ### Watch out... also defined in NnlmOnlineLearner.
anonymous enum |
Definition at line 272 of file NnlmOutputLayer.h.
{LEARNING_DISCRIMINANT=0, LEARNING_EMPIRICAL=1}; // Granted, this is not good.
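For orientation, a hedged usage sketch (not taken from PLearn's examples; vocabulary_size and the sizes below are hypothetical, and PP is assumed to be PLearn's smart pointer) of selecting the learning mode and cost on a freshly built layer:

    PP<NnlmOutputLayer> layer = new NnlmOutputLayer();
    layer->target_cardinality  = vocabulary_size;       // hypothetical value
    layer->context_cardinality = vocabulary_size + 1;   // ex: + 'missing' tag
    layer->input_size  = 100;                           // size of the representation 'r' (hypothetical)
    layer->output_size = 1;                             // this module outputs a single cost
    layer->build();
    layer->setLearning( NnlmOutputLayer::LEARNING_EMPIRICAL );
    layer->setCost( NnlmOutputLayer::COST_APPROX_DISCR );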
PLearn::NnlmOutputLayer::NnlmOutputLayer | ( | ) |
Default constructor.
Definition at line 70 of file NnlmOutputLayer.cc.
: OnlineLearningModule(),
    target_cardinality( -1 ),
    context_cardinality( -1 ),
    sigma2min( 0.001 ),                 // ### VERY IMPORTANT!!!
    dl_start_learning_rate( 0.0 ),
    dl_decrease_constant( 0.0 ),
    el_start_discount_factor( 0.01 ),   // ### VERY IMPORTANT!!!
    step_number( 0 ),
    umc( 0.999999 ),                    // ###
    learning( LEARNING_DISCRIMINANT ),
    cost( COST_DISCR ),
    target( -1 ),
    the_real_target( -1 ),
    context( -1 ),
    s( 0.0 ),
    g_exponent( 0.0 ),
    log_g_det_covariance( -REAL_MAX ),
    log_g_normalization( -REAL_MAX ),
    log_sum_p_ru( -REAL_MAX ),
    is_learning( false )
{
    // ### You may (or not) want to call build_() to finish building the object
    // ### (doing so assumes the parent classes' build_() have been called too
    // ### in the parent classes' constructors, something that you must ensure)
}
string PLearn::NnlmOutputLayer::_classname_ | ( | ) | [static] |
Reimplemented from PLearn::OnlineLearningModule.
Definition at line 49 of file NnlmOutputLayer.cc.
OptionList & PLearn::NnlmOutputLayer::_getOptionList_ | ( | ) | [static] |
Reimplemented from PLearn::OnlineLearningModule.
Definition at line 49 of file NnlmOutputLayer.cc.
RemoteMethodMap & PLearn::NnlmOutputLayer::_getRemoteMethodMap_ | ( | ) | [static] |
Reimplemented from PLearn::OnlineLearningModule.
Definition at line 49 of file NnlmOutputLayer.cc.
Object * PLearn::NnlmOutputLayer::_new_instance_for_typemap_ | ( | ) | [static]
Reimplemented from PLearn::OnlineLearningModule.
Definition at line 49 of file NnlmOutputLayer.cc.
void PLearn::NnlmOutputLayer::_static_initialize_ | ( | ) | [static]
Reimplemented from PLearn::OnlineLearningModule.
Definition at line 49 of file NnlmOutputLayer.cc.
void PLearn::NnlmOutputLayer::addCandidateContribution | ( | int | c | ) | const |
Definition at line 870 of file NnlmOutputLayer.cc.
References beta, gradient_log_tmp_neg, gradient_log_tmp_pos, i, PLearn::OnlineLearningModule::input_size, PLearn::logadd(), pi, PLERROR, PLearn::safelog(), and vec_log_p_rg_t.
Referenced by computeApproxDiscriminantGradient(), and computeDiscriminantGradient().
{
    for(int i=0; i<input_size; i++) {
        if( beta(c,i) > 0 ) {
            gradient_log_tmp_pos[i] = logadd( gradient_log_tmp_pos[i],
                vec_log_p_rg_t[c] + safelog( beta(c,i) ) + safelog( pi[c] ) );
        } else {
            gradient_log_tmp_neg[i] = logadd( gradient_log_tmp_neg[i],
                vec_log_p_rg_t[c] + safelog( -beta(c,i) ) + safelog( pi[c] ) );
        }
#ifdef BOUNDCHECK
        if( isnan(gradient_log_tmp_pos[i]) || isnan(gradient_log_tmp_neg[i]) ) {
            PLERROR("NnlmOutputLayer::computeApproxDiscriminantGradient - gradient_log_tmp_pos or gradient_log_tmp_neg is NAN.\n");
        }
#endif
    }
}
void PLearn::NnlmOutputLayer::applyAllClassVars | ( | ) |
Definition at line 356 of file NnlmOutputLayer.cc.
References PLearn::endl(), global_mu, global_sigma2, global_sumR, global_sumR2, i, PLearn::OnlineLearningModule::input_size, mu, pi, PLERROR, s_sumI, sigma2, sigma2min, sumI, sumR, and target_cardinality.
{
    // ### global values
    for(int i=0; i<input_size; i++) {
        global_mu[i] = global_sumR[i] / (real) s_sumI;
        // Divide by (n-1) instead of n
        global_sigma2[i] = ( (real) s_sumI * global_mu[i] * global_mu[i] + global_sumR2[i]
                             - 2.0 * global_mu[i] * global_sumR[i] ) / (s_sumI - 1);
        if(global_sigma2[i]<sigma2min) {
            cout << "NnlmOutputLayer::applyAllClassVars() -> global_sigma2[i]<sigma2min" << endl;
            global_sigma2[i] = sigma2min;
        }
    } // for input_size
    // ### global values

    for( int t=0; t<target_cardinality; t++ ) {
#ifdef BOUNDCHECK
        if( sumI[ t ] <= 1 ) {
            PLERROR("NnlmOutputLayer::applyAllClassVars - sumI[ %i ] <= 1\n", t);
        }
#endif
        for(int i=0; i<input_size; i++) {
            pi[t] = (real) sumI[ t ] / s_sumI;
            mu( t, i ) = sumR( t, i ) / (real) sumI[ t ];

            // ### global values
            /* // Divide by (n-1) instead of n
            sigma2( t, i ) = ( sumI[ t ] * mu(t, i) * mu(t, i) + sumR2(t, i)
                               - 2.0 * mu(t, i) * sumR(t, i) ) / (sumI[ t ] - 1);
            if(sigma2( t, i )<sigma2min) {
                //cout << "***" << t << "***" << sumI[ t ] << " out of " << s_sumI << endl;
                //cout << "NnlmOutputLayer::applyAllClassVars() -> sigma2(" << t << "," << i <<") "
                //     << sigma2(t, i) <<" < sigma2min(" << sigma2min <<")! Setting to sigma2min." << endl;
                sigma2( t, i ) = sigma2min;
            }
            */
            sigma2( t, i ) = global_sigma2[i];
            // ### global values
        } // for input_size

        /* cout << "***" << t << "***" << sumI[ t ] << " out of " << s_sumI << endl;
        cout << mu( t ) << endl;
        cout << sigma2( t ) << endl; */
    } // for target_cardinality
}
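The variance computed above uses a standard rearrangement of the unbiased sample variance, stated here for clarity (with n = s_sumI, or sumI[t] for the per-class version; sumR and sumR2 hold the running sums of r and r²):

$$ \hat\sigma^2_i \;=\; \frac{1}{n-1}\sum_{k=1}^{n}\big(r_{k,i}-\hat\mu_i\big)^2 \;=\; \frac{n\,\hat\mu_i^2 + \sum_k r_{k,i}^2 - 2\,\hat\mu_i\sum_k r_{k,i}}{n-1}, \qquad \hat\mu_i = \frac{1}{n}\sum_{k=1}^{n} r_{k,i}. $$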
void PLearn::NnlmOutputLayer::applyMuAndSigmaEmpiricalUpdate | ( | const Vec & | input | ) | const |
mu and sigma updates
empirical
Definition at line 988 of file NnlmOutputLayer.cc.
References el_last_update, PLearn::endl(), global_mu, global_sigma2, global_sumR, global_sumR2, i, PLearn::OnlineLearningModule::input_size, mu, PLERROR, s_sumI, sigma2, sigma2min, sumI, sumR, sumR2, and target.
Referenced by fprop().
{
    // *** Update counts (once per example, as in updateClassVars) ***
    s_sumI++;
    sumI[ target ]++;
    for(int i=0; i<input_size; i++) {
        sumR( target, i ) += input[i];
        sumR2( target, i ) += input[i]*input[i];
        // ### for a global_sigma2
        global_sumR[i] += input[i];
        global_sumR2[i] += input[i]*input[i];
        // ### for a global_sigma2
    }

    // *** Intermediate values ***
    int n_ex_since_last_update = s_sumI - (int)el_last_update[target];
    Vec old_mu;
    old_mu << mu(target);
    el_last_update[target] = sumI[ target ];

    // *** Compute learning rate ***
    //real el_lr = el_start_learning_rate[target] / ( 1.0 + sumI[target] * el_decrease_constant[target] );
    //cout << "el_lr " << el_lr << endl;

    // *** Update mu ***
    for(int i=0; i<input_size; i++) {
        mu( target, i ) = sumR( target, i ) / sumI[ target ];
        //mu( target, i ) = (1.0-el_lr) * mu( target, i ) + el_lr * input[i];
        // ### for a global_sigma2
        global_mu[i] = global_sumR[i] / (real) s_sumI;
    }

    // *** Update sigma ***
    for(int i=0; i<input_size; i++) {
        // ### for a global_sigma2
        // Divide by (n-1) instead of n
        global_sigma2[i] = ( (real) s_sumI * global_mu[i] * global_mu[i] + global_sumR2[i]
                             - 2.0 * global_mu[i] * global_sumR[i] ) / (s_sumI - 1);
        /* sigma2( target, i ) = ( sumI[target]*mu(target, i)*mu(target, i) + sumR2(target,i)
                                   - 2.0 * mu(target, i) * sumR(target, i) ) / (sumI[target]-1); */
        sigma2( target, i ) = global_sigma2[i];
        // ### for a global_sigma2

        // Add regularizer to compensate for the frequency at which the word is seen
        // TODO
        // old_mu

        // Enforce minimal sigma
        if(sigma2( target, i )<sigma2min) {
            cout << "<sigma2min!" << endl;
            sigma2( target, i ) = sigma2min;
        }
        if( isnan( sigma2( target, i ) ) ) {
            PLERROR( "NnlmOutputLayer::applyMuAndSigmaEmpiricalUpdate - isnan( sigma2( target, i ) )!\n" );
        }
    }

    // Update uniform mixture coefficient
    //sum_log_p_g_r = logadd( sum_log_p_g_r, log_p_g_r );
    //umc = safeexp( sum_log_p_g_r ) / s_sumI;
}
void PLearn::NnlmOutputLayer::applyMuCandidateGradient | ( | int | c | ) | const |
MUST be called after the corresponding fprop.
Definition at line 1160 of file NnlmOutputLayer.cc.
References beta, dl_lr, i, PLearn::OnlineLearningModule::input_size, log_sum_p_ru, mu, pi, PLearn::safeexp(), PLearn::safelog(), and vec_log_p_rg_t.
Referenced by applyMuGradient().
{
    // Vec bill( input_size );
    Vec mu_gradient(input_size);

    for( int i=0; i<input_size; i++ ) {
        if( beta(c,i) > 0.0 ) {
            mu_gradient[i] = safeexp( safelog( pi[c] ) + vec_log_p_rg_t[c]
                                      + safelog( beta(c,i) ) - log_sum_p_ru );
        } else {
            mu_gradient[i] = - safeexp( safelog( pi[c] ) + vec_log_p_rg_t[c]
                                        + safelog( -beta(c,i) ) - log_sum_p_ru );
        }
        mu(c,i) -= dl_lr * mu_gradient[i];
        //bill[i] = - dl_lr * mu_gradient[i];
    }
    //cout << "MU candidate GRADIENT " << bill << endl;
}
void PLearn::NnlmOutputLayer::applyMuGradient | ( | ) | const |
Compute and apply gradients of different costs with respect to mus.
MUST be called after the corresponding fprop Computes gradients of the non discriminant cost with respect to mu and sigma.
Definition at line 1066 of file NnlmOutputLayer.cc.
References applyMuCandidateGradient(), applyMuTargetGradient(), c, candidates, context, cost, COST_APPROX_DISCR, COST_DISCR, COST_NON_DISCR, dl_decrease_constant, dl_lr, dl_start_learning_rate, i, PLearn::OnlineLearningModule::input_size, PLearn::TVec< T >::length(), mu, nd_gradient, PLERROR, shared_candidates, step_number, target_cardinality, the_real_target, and u.
Referenced by bpropUpdate().
{
    dl_lr = dl_start_learning_rate / ( 1.0 + dl_decrease_constant * step_number);

    if( cost == COST_NON_DISCR ) {
        Vec mu_gradient( input_size );
        mu_gradient << nd_gradient;
        for( int i=0; i<input_size; i++ ) {
            mu_gradient[i] = - mu_gradient[i];
            mu(the_real_target,i) -= dl_lr * mu_gradient[i];
        }
    } else if( cost == COST_APPROX_DISCR ) {
        // for the target
        applyMuTargetGradient();

        // --- for the others ---
        int c;
        // shared candidates
        for( int i=0; i< shared_candidates.length(); i++ ) {
            c = shared_candidates[i];
            if( c != the_real_target ) {
                applyMuCandidateGradient(c);
            }
        }
        // context candidates
        for( int i=0; i< candidates[ context ].length(); i++ ) {
            c = candidates[ context ][i];
            if( c != the_real_target ) {
                applyMuCandidateGradient(c);
            }
        }
    } else if( cost == COST_DISCR ) {
        applyMuTargetGradient();
        for( int u=0; u< target_cardinality; u++ ) {
            if( u != the_real_target ) {
                applyMuCandidateGradient(u);
            }
        }
    } else {
        PLERROR("NnlmOutputLayer::applyMuGradient - invalid cost\n");
    }
}
void PLearn::NnlmOutputLayer::applyMuTargetGradient | ( | ) | const |
MUST be called after the corresponding fprop.
Definition at line 1130 of file NnlmOutputLayer.cc.
References beta, dl_lr, i, PLearn::OnlineLearningModule::input_size, log_sum_p_ru, mu, nd_gradient, pi, PLearn::safeexp(), PLearn::safelog(), the_real_target, and vec_log_p_rg_t.
Referenced by applyMuGradient().
{
    // Vec bill( input_size );
    Vec mu_gradient( input_size );
    mu_gradient << nd_gradient;

    for( int i=0; i<input_size; i++ ) {
        mu_gradient[i] = - mu_gradient[i];
        if( beta(the_real_target,i) > 0.0 ) {
            mu_gradient[i] += safeexp( safelog( pi[the_real_target] ) + vec_log_p_rg_t[the_real_target]
                                       + safelog( beta(the_real_target,i) ) - log_sum_p_ru );
        } else {
            mu_gradient[i] -= safeexp( safelog( pi[the_real_target] ) + vec_log_p_rg_t[the_real_target]
                                       + safelog( -beta(the_real_target,i) ) - log_sum_p_ru );
        }
        mu(the_real_target,i) -= dl_lr * mu_gradient[i];
        //bill[i] = mu_gradient[i];
    }
    //cout << "MU target GRADIENT " << bill << endl;
}
void PLearn::NnlmOutputLayer::applySigmaCandidateGradient | ( | int | c | ) | const |
Definition at line 1272 of file NnlmOutputLayer.cc.
References beta, c, dl_lr, i, PLearn::OnlineLearningModule::input_size, log_sum_p_ru, pi, PLearn::safeexp(), sigma2, sigma2min, and vec_log_p_rg_t.
Referenced by applySigmaGradient().
{
    // Vec bob( input_size );
    Vec sigma2_gradient( input_size );

    real tmp2 = 0.5 * pi[c] * safeexp( vec_log_p_rg_t[ c ] - log_sum_p_ru );
    real tmp3;

    for( int i=0; i<input_size; i++ ) {
        tmp3 = beta(c,i) * beta(c,i) - 1.0/sigma2(c,i);
        sigma2_gradient[i] = tmp2 * tmp3;
        sigma2(c,i) -= dl_lr * sigma2_gradient[i];
        // Enforce minimal sigma
        if(sigma2( c, i )<sigma2min) {
            sigma2( c, i ) = sigma2min;
        }
        //bob[i] = - dl_lr * sigma2_gradient[i];
    }
    //cout << "SIGMA candidate GRADIENT " << bob << endl;
}
void PLearn::NnlmOutputLayer::applySigmaGradient | ( | ) | const |
Compute and apply gradients of different costs with respect to sigmas.
Definition at line 1184 of file NnlmOutputLayer.cc.
References applySigmaCandidateGradient(), applySigmaTargetGradient(), beta, c, candidates, context, cost, COST_APPROX_DISCR, COST_DISCR, COST_NON_DISCR, dl_decrease_constant, dl_lr, dl_start_learning_rate, i, PLearn::OnlineLearningModule::input_size, PLearn::TVec< T >::length(), PLERROR, PLearn::safeexp(), shared_candidates, sigma2, step_number, target_cardinality, the_real_target, u, vec_log_p_r_t, and vec_log_p_rg_t.
Referenced by bpropUpdate().
{
    dl_lr = dl_start_learning_rate / ( 1.0 + dl_decrease_constant * step_number);

    Vec sigma2_gradient( input_size );

    if( cost == COST_NON_DISCR ) {
        real tmp = -0.5 * safeexp( vec_log_p_rg_t[ the_real_target ] - vec_log_p_r_t[ the_real_target ] );
        for( int i=0; i<input_size; i++ ) {
            sigma2_gradient[i] = tmp * ( beta(the_real_target,i) * beta(the_real_target,i)
                                         - 1.0/sigma2(the_real_target,i) );
            sigma2(the_real_target,i) -= dl_lr * sigma2_gradient[i];
        }
    } else if( cost == COST_APPROX_DISCR ) {
        applySigmaTargetGradient();

        // --- for the others ---
        int c;
        // shared candidates
        for( int i=0; i< shared_candidates.length(); i++ ) {
            c = shared_candidates[i];
            if( c != the_real_target ) {
                applySigmaCandidateGradient(c);
            }
        }
        // context candidates
        for( int i=0; i< candidates[ context ].length(); i++ ) {
            c = candidates[ context ][i];
            if( c != the_real_target ) {
                applySigmaCandidateGradient(c);
            }
        }
    } else if( cost == COST_DISCR ) {
        applySigmaTargetGradient();
        for( int u=0; u< target_cardinality; u++ ) {
            if( u != the_real_target ) {
                applySigmaCandidateGradient(u);
            }
        }
    } else {
        PLERROR("NnlmOutputLayer::applySigmaGradient - invalid cost\n");
    }
}
void PLearn::NnlmOutputLayer::applySigmaTargetGradient | ( | ) | const |
Definition at line 1244 of file NnlmOutputLayer.cc.
References beta, dl_lr, i, PLearn::OnlineLearningModule::input_size, log_sum_p_ru, pi, PLearn::safeexp(), sigma2, sigma2min, the_real_target, vec_log_p_r_t, and vec_log_p_rg_t.
Referenced by applySigmaGradient().
{
    // Vec bob( input_size );
    Vec sigma2_gradient( input_size );

    real tmp  = -0.5 * safeexp( vec_log_p_rg_t[ the_real_target ] - vec_log_p_r_t[ the_real_target ] );
    real tmp2 =  0.5 * pi[the_real_target] * safeexp( vec_log_p_rg_t[ the_real_target ] - log_sum_p_ru );
    real tmp3;

    for( int i=0; i<input_size; i++ ) {
        tmp3 = beta(the_real_target,i) * beta(the_real_target,i) - 1.0/sigma2(the_real_target,i);
        sigma2_gradient[i]  = tmp * tmp3;
        sigma2_gradient[i] += tmp2 * tmp3;
        sigma2(the_real_target,i) -= dl_lr * sigma2_gradient[i];
        // Enforce minimal sigma
        if(sigma2( the_real_target, i )<sigma2min) {
            sigma2( the_real_target, i ) = sigma2min;
        }
        //bob[i] = sigma2_gradient[i];
    }
    //cout << "SIGMA target GRADIENT " << bob << endl;
}
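For clarity, the quantities tmp, tmp2 and tmp3 above correspond to the two terms of the derivative of the discriminant cost C = -log p(r,t) + log sum_u p(r,u) with respect to a diagonal variance (a transcription consistent with the code; recall that vec_log_p_rg_t[t] = log( umc p_g(r|t) )):

$$ \frac{\partial C}{\partial \sigma^2_{t,i}} \;=\; \underbrace{-\tfrac12\, e^{\log p(r,g|t) - \log p(r|t)}}_{\text{tmp}} \cdot \text{tmp3} \;+\; \underbrace{\tfrac12\,\pi_t\, e^{\log p(r,g|t) - \log \sum_u p(r,u)}}_{\text{tmp2}} \cdot \text{tmp3}, \qquad \text{tmp3} = \beta_{t,i}^2 - \frac{1}{\sigma^2_{t,i}}. $$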
void PLearn::NnlmOutputLayer::bpropUpdate | ( | const Vec & | input, |
const Vec & | output, | ||
Vec & | input_gradient, | ||
const Vec & | output_gradient | ||
) | [virtual] |
Adapt based on the output gradient: this method should only be called just after a corresponding fprop; it should be called with the same arguments as fprop for the first two arguments (and output should not have been modified since then).
Since sub-classes are supposed to learn ONLINE, the object is 'ready-to-be-used' just after any bpropUpdate. NOT IMPLEMENTED - GRADIENT COMPUTED IN NnlmOnlineLearner (and I'm not sure why... TODO find out). This version allows one to obtain the input gradient as well. N.B. THE DEFAULT IMPLEMENTATION IN THE SUPER-CLASS JUST RAISES A PLERROR.
Since sub-classes are supposed to learn ONLINE, the object is 'ready-to-be-used' just after any bpropUpdate. N.B. A DEFAULT IMPLEMENTATION IS PROVIDED IN THE SUPER-CLASS, WHICH JUST CALLS bpropUpdate(input, output, input_gradient, output_gradient) AND IGNORES THE INPUT GRADIENT.
Definition at line 909 of file NnlmOutputLayer.cc.
References ad_gradient, applyMuGradient(), applySigmaGradient(), computeApproxDiscriminantGradient(), computeDiscriminantGradient(), computeNonDiscriminantGradient(), cost, COST_APPROX_DISCR, COST_DISCR, COST_NON_DISCR, fd_gradient, i, PLearn::OnlineLearningModule::input_size, learning, LEARNING_DISCRIMINANT, nd_gradient, PLearn::OnlineLearningModule::output_size, PLERROR, and PLearn::TVec< T >::size().
{
    int in_size = input.size();
    int out_size = output.size();
    int og_size = output_gradient.size();

    // *** Sanity checks
    if( in_size != input_size ) {
        PLERROR("NnlmOutputLayer::bpropUpdate:'input.size()' should be equal\n"
                " to 'input_size' (%i != %i)\n", in_size, input_size);
    } else if( out_size != output_size ) {
        PLERROR("NnlmOutputLayer::bpropUpdate:'output.size()' should be equal\n"
                " to 'output_size' (%i != %i)\n", out_size, output_size);
    } else if( og_size != output_size ) {
        PLERROR("NnlmOutputLayer::bpropUpdate:'output_gradient.size()' should\n"
                " be equal to 'output_size' (%i != %i)\n", og_size, output_size);
    }

    // *** Compute input_gradient ***
    if( cost == COST_NON_DISCR ) {
        computeNonDiscriminantGradient();
        input_gradient << nd_gradient;
    } else if( cost == COST_APPROX_DISCR ) {
        computeApproxDiscriminantGradient();
        input_gradient << ad_gradient;
    } else if( cost == COST_DISCR ) {
        computeDiscriminantGradient();
        input_gradient << fd_gradient;
    } else {
        PLERROR("NnlmOutputLayer::bpropUpdate - invalid cost\n");
    }
    // cout << "NnlmOutputLayer::bpropUpdate -> input_gradient " << input_gradient << endl;

#ifdef BOUNDCHECK
    for(int i=0; i<input_size; i++) {
        if( isnan(input_gradient[i]) ) {
            PLERROR( "NnlmOutputLayer::bpropUpdate - isnan(input_gradient[i]) true.\n" );
        }
    }
#endif

    // *** Discriminant learning of mu and sigma ***
    if( learning == LEARNING_DISCRIMINANT ) {
        applyMuGradient();
        applySigmaGradient();
    }

    // *** Empirical learning of mu and sigma ***
    //if( learning == LEARNING_EMPIRICAL ) {
    //    applyMuAndSigmaEmpirical();
    //}
}
void PLearn::NnlmOutputLayer::build | ( | ) | [virtual] |
Post-constructor.
The normal implementation should call simply inherited::build(), then this class's build_(). This method should be callable again at later times, after modifying some option fields to change the "architecture" of the object.
Reimplemented from PLearn::OnlineLearningModule.
Definition at line 180 of file NnlmOutputLayer.cc.
References PLearn::OnlineLearningModule::build(), and build_().
{
    inherited::build();
    build_();
}
void PLearn::NnlmOutputLayer::build_ | ( | ) | [private] |
This does the actual building.
Reimplemented from PLearn::OnlineLearningModule.
Definition at line 189 of file NnlmOutputLayer.cc.
References PLearn::OnlineLearningModule::input_size, mu, PLearn::OnlineLearningModule::output_size, PLERROR, resetParameters(), and PLearn::TMat< T >::size().
Referenced by build().
{
    // *** Sanity checks ***
    if( input_size <= 0 ) {
        PLERROR("NnlmOutputLayer::build_: 'input_size' <= 0 (%i).\n"
                "You should set it to a positive integer.\n", input_size);
    } else if( output_size != 1 ) {
        PLERROR("NnlmOutputLayer::build_: 'output_size'(=%i) != 1\n", output_size);
    }

    // *** Parameters not initialized ***
    if( mu.size() == 0 ) {
        resetParameters();
    }
}
string PLearn::NnlmOutputLayer::classname | ( | ) | const [virtual] |
void PLearn::NnlmOutputLayer::compute_approx_nl_p_t_r | ( | const Vec & | input,
Vec & | output | ||
) | const
Computes the approximation -log( p(t|r) ) using only some candidates for normalization.
Computes the approximate discriminant cost.
Definition at line 716 of file NnlmOutputLayer.cc.
References c, candidates, compute_nl_p_rt(), context, i, PLearn::TVec< T >::length(), log_sum_p_ru, PLearn::logadd(), PLERROR, PLearn::TVec< T >::resize(), setTarget(), shared_candidates, the_real_target, and vec_log_p_rt.
Referenced by fprop().
{
    // *** Compute for the target ***
    Vec vec_nd_cost(1);
    compute_nl_p_rt(input, vec_nd_cost);   //nd_cost = -log_p_rt;

    // *** Compute for the normalization candidates ***
    Vec nl_p_ru;
    nl_p_ru.resize( 1 );
    log_sum_p_ru = vec_log_p_rt[the_real_target];

    int c;
    // shared candidates
    for( int i=0; i< shared_candidates.length(); i++ ) {
        c = shared_candidates[i];
        if( c!=the_real_target ) {
            setTarget( c );
            compute_nl_p_rt( input, nl_p_ru );
            log_sum_p_ru = logadd(log_sum_p_ru, -nl_p_ru[0]);
        }
    }
    // context candidates
    for( int i=0; i< candidates[ context ].length(); i++ ) {
        c = candidates[ context ][i];
        if( c!=the_real_target ) {
            setTarget( c );
            compute_nl_p_rt( input, nl_p_ru );
            log_sum_p_ru = logadd(log_sum_p_ru, -nl_p_ru[0]);
        }
    }

    // *** The approximate discriminant cost ***
    output[0] = vec_nd_cost[0] + log_sum_p_ru;

#ifdef BOUNDCHECK
    if( isnan(output[0]) ) {
        PLERROR( "NnlmOutputLayer::compute_approx_nl_p_t_r - NAN present.\n" );
    }
#endif
}
void PLearn::NnlmOutputLayer::compute_nl_p_rt | ( | const Vec & | input,
Vec & | output | ||
) | const
Computes -log( p(r,t) ) = -log( ( umc p_gaussian(r|t) + (1-umc) p_uniform(r|t) ) p(t) )
Definition at line 558 of file NnlmOutputLayer.cc.
References beta, g_exponent, i, PLearn::OnlineLearningModule::input_size, log_g_det_covariance, log_g_normalization, PLearn::logadd(), mu, pi, Pi, PLERROR, s, PLearn::safelog(), sigma2, PLearn::TVec< T >::size(), target, umc, vec_log_p_r_t, vec_log_p_rg_t, and vec_log_p_rt.
Referenced by compute_approx_nl_p_t_r(), compute_nl_p_t_r(), fprop(), and getBestCandidates().
{
    // *** Sanity check ***
    int in_size = input.size();
    if( in_size != input_size ) {
        PLERROR("NnlmOutputLayer::compute_nl_p_rt: 'input.size()' should be equal\n"
                " to 'input_size' (%i != %i)\n", in_size, input_size);
    }

    // *** Compute gaussian's exponent - 'g' means gaussian ***
    // NOTE \Sigma is a diagonal matrix, ie det() = \Prod and inverse is 1/...
    g_exponent = 0.0;
    log_g_det_covariance = 0.0;

    for(int i=0; i<input_size; i++) {
        // s = r[i] - mu_t[i]
        s = input[i] - mu(target, i);
        // memorize this calculation for gradients computation
        beta(target, i) = s / sigma2(target, i);
        g_exponent += s * beta(target, i);
        // determinant of covariance matrix
        log_g_det_covariance += safelog( sigma2(target, i) );
    }
    g_exponent *= -0.5;
    // ### Should we use logs here?
    //cout << "g_exponent " << g_exponent << " log_g_det_covariance " << log_g_det_covariance << endl;

#ifdef BOUNDCHECK
    if( isnan(g_exponent) || isnan(log_g_det_covariance) ) {
        PLERROR( "NnlmOutputLayer::compute_nl_p_rt - NAN present.\n" );
    }
#endif

    // * Compute normalizing factor
    log_g_normalization = - 0.5 * ( (input_size) * safelog(2.0 * Pi) + log_g_det_covariance );
    //cout << "log_g_normalization " << log_g_normalization << endl;

    // * Compute log p(r,g|t) = log( p(r|t,g) p(g) ) = log( umc p_gaussian(r|t) )
    vec_log_p_rg_t[target] = safelog(umc) + g_exponent + log_g_normalization;
    //cout << "p(r,g|t) " << safeexp( vec_log_p_rg_t[target] ) << endl;

    // * Compute log p(r|t) = log( umc p_g(r|t) + (1-umc) p_u(r|t) )
    vec_log_p_r_t[target] = logadd( vec_log_p_rg_t[target],
                                    safelog(1.0-umc) - (input_size) * safelog(2.0) );
    //cout << "p_u " << safeexp( safelog(1.0-umc) - (input_size) * safelog(2.0) ) << endl;

    // * Compute log p(r,t)
    vec_log_p_rt[target] = safelog(pi[target]) + vec_log_p_r_t[target];

    // * Compute output
    output[0] = - vec_log_p_rt[target];
    //cout << "safeexp( vec_log_p_rt[target] ) " << safeexp( vec_log_p_rt[target] ) << endl;

#ifdef BOUNDCHECK
    if( isnan(vec_log_p_rt[target]) ) {
        PLERROR( "NnlmOutputLayer::compute_nl_p_rt - NAN present.\n" );
    }
#endif

    // * Compute posterior for coeff_class_conditional_uniform_mixture evaluation in the bpropUpdate
    // p(generated by gaussian| r) = a p_g(r|i) / p(r|i)
    //log_p_g_r = safelog(umc) + g_exponent + log_g_normalization - log_p_r_i;
}
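In equations, the quantities computed above are (a transcription of the code, with d = input_size, t the current target, and β_{t,i} memorized for the gradient computations):

$$ \beta_{t,i} = \frac{r_i - \mu_{t,i}}{\sigma^2_{t,i}}, \qquad \log p(r,g\,|\,t) = \log(\mathrm{umc}) - \tfrac12\Big(\sum_{i=1}^{d} (r_i-\mu_{t,i})\,\beta_{t,i} + d\,\log(2\pi) + \sum_{i=1}^{d}\log\sigma^2_{t,i}\Big), $$
$$ \log p(r\,|\,t) = \operatorname{logadd}\big(\log p(r,g\,|\,t),\; \log(1-\mathrm{umc}) - d\,\log 2\big), \qquad \log p(r,t) = \log\pi_t + \log p(r\,|\,t), $$

and output[0] = -log p(r,t).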
void PLearn::NnlmOutputLayer::compute_nl_p_t_r | ( | const Vec & | input,
Vec & | output | ||
) | const
Computes -log( p(t|r) ) = -log( p(r,t) / sum_u p(r,u) )
Definition at line 643 of file NnlmOutputLayer.cc.
References compute_nl_p_rt(), log_sum_p_ru, PLearn::logadd(), PLERROR, PLearn::TVec< T >::resize(), setTarget(), target_cardinality, and u.
Referenced by fprop().
{
    Vec nl_p_rt;
    Vec nl_p_ru;
    nl_p_rt.resize( 1 );
    nl_p_ru.resize( 1 );

    // * Compute numerator
    compute_nl_p_rt( input, nl_p_rt );

    // * Compute denominator
    // Normalize over whole vocabulary
    log_sum_p_ru = -REAL_MAX;
    for(int u=0; u<target_cardinality; u++) {
        setTarget( u );
        compute_nl_p_rt( input, nl_p_ru );
        log_sum_p_ru = logadd(log_sum_p_ru, -nl_p_ru[0]);
    }
    //cout << "log_p_rt[0] " << -nl_p_rt[0] << " log_sum_p_ru " << log_sum_p_ru << endl;

    output[0] = nl_p_rt[0] + log_sum_p_ru;
    //cout << "p_t_r " << safeexp( - output[0] ) << endl;

#ifdef BOUNDCHECK
    if( isnan(output[0]) ) {
        PLERROR( "NnlmOutputLayer::compute_nl_p_t_r - NAN present.\n" );
    }
#endif
}
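Equivalently, written out (the sum over u is accumulated in the log domain with logadd to avoid underflow):

$$ -\log p(t\,|\,r) \;=\; -\log p(r,t) \;+\; \log \sum_{u=0}^{\text{target\_cardinality}-1} p(r,u). $$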
void PLearn::NnlmOutputLayer::computeApproxDiscriminantGradient | ( | ) | const |
MUST be called after the corresponding fprop.
Definition at line 786 of file NnlmOutputLayer.cc.
References ad_gradient, addCandidateContribution(), c, candidates, computeNonDiscriminantGradient(), context, PLearn::TVec< T >::fill(), gradient_log_tmp, gradient_log_tmp_neg, gradient_log_tmp_pos, i, PLearn::OnlineLearningModule::input_size, j, PLearn::TVec< T >::length(), log_sum_p_ru, PLearn::logsub(), nd_gradient, PLearn::safeexp(), shared_candidates, and the_real_target.
Referenced by bpropUpdate().
{
    gradient_log_tmp.fill(-REAL_MAX);
    gradient_log_tmp_pos.fill(-REAL_MAX);
    gradient_log_tmp_neg.fill(-REAL_MAX);

    // * Compute nd gradient
    computeNonDiscriminantGradient();

    // * Compute ad specific term
    int c;
    // target
    addCandidateContribution( the_real_target );
    // shared candidates
    for( int i=0; i< shared_candidates.length(); i++ ) {
        c = shared_candidates[i];
        if( c != the_real_target )
            addCandidateContribution( c );
    }
    // context candidates
    for( int i=0; i< candidates[ context ].length(); i++ ) {
        c = candidates[ context ][i];
        if( c != the_real_target )
            addCandidateContribution( c );
    }

    // *** The corresponding approx gradient ***
    for(int j=0; j<input_size; j++) {
        if( gradient_log_tmp_pos[j] > gradient_log_tmp_neg[j] ) {
            gradient_log_tmp[j] = logsub( gradient_log_tmp_pos[j], gradient_log_tmp_neg[j] );
            ad_gradient[j] = nd_gradient[j] - safeexp( gradient_log_tmp[j] - log_sum_p_ru);
        } else {
            gradient_log_tmp[j] = logsub( gradient_log_tmp_neg[j], gradient_log_tmp_pos[j] );
            ad_gradient[j] = nd_gradient[j] + safeexp( gradient_log_tmp[j] - log_sum_p_ru);
        }
    }
}
void PLearn::NnlmOutputLayer::computeDiscriminantGradient | ( | ) | const |
MUST be called after the corresponding fprop.
Definition at line 835 of file NnlmOutputLayer.cc.
References addCandidateContribution(), computeNonDiscriminantGradient(), fd_gradient, PLearn::TVec< T >::fill(), gradient_log_tmp, gradient_log_tmp_neg, gradient_log_tmp_pos, PLearn::OnlineLearningModule::input_size, j, log_sum_p_ru, PLearn::logsub(), nd_gradient, PLearn::safeexp(), target_cardinality, and u.
Referenced by bpropUpdate().
{
    gradient_log_tmp.fill(-REAL_MAX);
    gradient_log_tmp_pos.fill(-REAL_MAX);
    gradient_log_tmp_neg.fill(-REAL_MAX);

    // * Compute nd gradient
    computeNonDiscriminantGradient();

    // * Compute discriminant specific term
    for( int u=0; u< target_cardinality; u++ ) {
        addCandidateContribution( u );
    }

    // *** The corresponding gradient ***
    for(int j=0; j<input_size; j++) {
        if( gradient_log_tmp_pos[j] > gradient_log_tmp_neg[j] ) {
            gradient_log_tmp[j] = logsub( gradient_log_tmp_pos[j], gradient_log_tmp_neg[j] );
            fd_gradient[j] = nd_gradient[j] - safeexp( gradient_log_tmp[j] - log_sum_p_ru);
        } else {
            gradient_log_tmp[j] = logsub( gradient_log_tmp_neg[j], gradient_log_tmp_pos[j] );
            fd_gradient[j] = nd_gradient[j] + safeexp( gradient_log_tmp[j] - log_sum_p_ru);
        }
    }
    //cout << "===nd_gradient " << nd_gradient << endl;
    //cout << "---fd_gradient " << fd_gradient << endl;
}
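The pos/neg split above implements the derivative of the normalization term: since β_{u,i} can take either sign while all sums are kept in the log domain, positive and negative contributions are accumulated separately and combined with logsub. Written out (consistent with the code):

$$ \frac{\partial}{\partial r_i}\,\log \sum_u p(r,u) \;=\; -\,\frac{\sum_u \pi_u\,\mathrm{umc}\;p_g(r\,|\,u)\,\beta_{u,i}}{\sum_u p(r,u)}. $$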
void PLearn::NnlmOutputLayer::computeEmpiricalLearningRateParameters | ( | ) |
Computes the word specific empirical learning rates MUST be called after a valid call to applyClassCounts()
Definition at line 431 of file NnlmOutputLayer.cc.
References el_decrease_constant, el_last_update, el_start_discount_factor, el_start_learning_rate, PLearn::TVec< T >::fill(), i, PLearn::pow(), PLearn::TVec< T >::resize(), s_sumI, sumI, and target_cardinality.
{
    // *** Start learning rate ***
    // (1-slr)^n = el_start_discount_factor -> slr = 1 - (el_start_discount_factor)^{1/n}
    el_start_learning_rate.resize(target_cardinality);
    el_start_learning_rate.fill(1.0);
    for(int i=0; i<target_cardinality; i++) {
        el_start_learning_rate[i] -= pow( el_start_discount_factor, 1.0/sumI[i] );
    }

    // *** Decrease constant ***
    el_decrease_constant.resize(target_cardinality);
    el_decrease_constant.fill(0.0);

    // *** To memorize the step of the last update to the word ***
    el_last_update.resize(target_cardinality);
    el_last_update.fill(s_sumI);
}
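The start learning rate solves the relation noted in the comment above: if the first of the n_i = sumI[i] occurrences of word i is to be discounted by el_start_discount_factor relative to the last, then

$$ (1 - \mathrm{slr}_i)^{n_i} = \text{el\_start\_discount\_factor} \;\Longrightarrow\; \mathrm{slr}_i = 1 - \text{el\_start\_discount\_factor}^{\,1/n_i}. $$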
void PLearn::NnlmOutputLayer::computeNonDiscriminantGradient | ( | ) | const |
Gradients with respect to input.
MUST be called after the corresponding fprop.
Definition at line 769 of file NnlmOutputLayer.cc.
References beta, i, PLearn::OnlineLearningModule::input_size, nd_gradient, PLearn::safeexp(), the_real_target, vec_log_p_r_t, and vec_log_p_rg_t.
Referenced by bpropUpdate(), computeApproxDiscriminantGradient(), and computeDiscriminantGradient().
{
    //cout << "vec_log_p_rg_t[the_real_target] " << vec_log_p_rg_t[the_real_target]
    //     << " vec_log_p_r_t[the_real_target] " << vec_log_p_r_t[the_real_target] << endl;
    real tmp = safeexp( vec_log_p_rg_t[the_real_target] - vec_log_p_r_t[the_real_target] );
    for(int i=0; i<input_size; i++) {
        nd_gradient[i] = beta( the_real_target, i) * tmp;
    }
}
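This one-liner is the derivative of the non-discriminant cost with respect to the input, written out here for clarity (recall p(r|t) = umc p_g(r|t) + (1-umc) p_u(r|t) and that vec_log_p_rg_t[t] = log( umc p_g(r|t) )):

$$ \frac{\partial}{\partial r_i}\big[-\log p(r,t)\big] \;=\; \frac{\mathrm{umc}\; p_g(r\,|\,t)}{p(r\,|\,t)}\,\beta_{t,i} \;=\; e^{\log p(r,g|t)\,-\,\log p(r|t)}\;\beta_{t,i}. $$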
void PLearn::NnlmOutputLayer::declareOptions | ( | OptionList & | ol | ) | [static, protected] |
Declares the class options.
Reimplemented from PLearn::OnlineLearningModule.
Definition at line 100 of file NnlmOutputLayer.cc.
References PLearn::OptionBase::buildoption, context_cardinality, PLearn::declareOption(), PLearn::OnlineLearningModule::declareOptions(), dl_decrease_constant, dl_start_learning_rate, el_start_discount_factor, PLearn::OptionBase::learntoption, mu, pi, s_sumI, sigma2, sigma2min, step_number, sumI, sumR, sumR2, target_cardinality, and umc.
{
    // * Build Options *
    declareOption(ol, "target_cardinality", &NnlmOutputLayer::target_cardinality,
                  OptionBase::buildoption, "Number of target tags.");
    declareOption(ol, "context_cardinality", &NnlmOutputLayer::context_cardinality,
                  OptionBase::buildoption, "Number of context tags (usually, there will be the additional 'missing' tag).");
    declareOption(ol, "sigma2min", &NnlmOutputLayer::sigma2min,
                  OptionBase::buildoption, "Minimal value for the diagonal covariance matrix.");
    declareOption(ol, "dl_start_learning_rate", &NnlmOutputLayer::dl_start_learning_rate,
                  OptionBase::buildoption, "Discriminant learning start learning rate.");
    declareOption(ol, "dl_decrease_constant", &NnlmOutputLayer::dl_decrease_constant,
                  OptionBase::buildoption, "Discriminant learning decrease constant.");
    declareOption(ol, "el_start_discount_factor", &NnlmOutputLayer::el_start_discount_factor,
                  OptionBase::buildoption, "How much weight is given to the first example of a given word with respect to the last, ex 0.2.");
    /* declareOption(ol, "el_decrease_constant", &NnlmOutputLayer::el_decrease_constant,
                  OptionBase::buildoption, "Empirical learning decrease constant of gaussian parameters discount rate."); */

    // * Learnt Options *
    declareOption(ol, "step_number", &NnlmOutputLayer::step_number,
                  OptionBase::learntoption, "The step number, incremented after each update.");
    declareOption(ol, "umc", &NnlmOutputLayer::umc,
                  OptionBase::learntoption, "The uniform mixture coefficient. p(r|i) = umc p_gauss + (1-umc) p_uniform");
    declareOption(ol, "pi", &NnlmOutputLayer::pi,
                  OptionBase::learntoption, "pi[t] -> empirical frequency of y==t" );
    declareOption(ol, "mu", &NnlmOutputLayer::mu,
                  OptionBase::learntoption, "mu(t) -> empirical mean of the r's when y==t" );
    declareOption(ol, "sigma2", &NnlmOutputLayer::sigma2,
                  OptionBase::learntoption, "sigma2(t) -> empirical variance of the r's when y==t" );
    declareOption(ol, "sumR", &NnlmOutputLayer::sumR,
                  OptionBase::learntoption, "sumR(i) -> sum_t r_t 1_{y==i}" );
    declareOption(ol, "sumR2", &NnlmOutputLayer::sumR2,
                  OptionBase::learntoption, "sumR2(i) -> sum_t r_t^2 1_{y==i}" );
    declareOption(ol, "sumI", &NnlmOutputLayer::sumI,
                  OptionBase::learntoption, "sumI(i) -> sum_t 1_{y==i}" );
    declareOption(ol, "s_sumI", &NnlmOutputLayer::s_sumI,
                  OptionBase::learntoption, "sum_t 1" );
    // ### other?

    // Now call the parent class' declareOptions
    inherited::declareOptions(ol);
}
static const PPath& PLearn::NnlmOutputLayer::declaringFile | ( | ) | [inline, static] |
Reimplemented from PLearn::OnlineLearningModule.
Definition at line 201 of file NnlmOutputLayer.h.
NnlmOutputLayer * PLearn::NnlmOutputLayer::deepCopy | ( | CopiesMap & | copies | ) | const [virtual] |
Reimplemented from PLearn::OnlineLearningModule.
Definition at line 49 of file NnlmOutputLayer.cc.
void PLearn::NnlmOutputLayer::forget | ( | ) | [virtual] |
reset the parameters to the state they would be BEFORE starting training.
Note that this method is necessarily called from build().
Implements PLearn::OnlineLearningModule.
Definition at line 1298 of file NnlmOutputLayer.cc.
References PLearn::endl(), and resetParameters().
{ cout << "NnlmOutputLayer::forget()" << endl; resetParameters(); }
void PLearn::NnlmOutputLayer::fprop | ( | const Vec & | input,
Vec & | output | ||
) | const [virtual]
Computes the 'cost' for the 'target' given the input, and computes the output (possibly resizing it appropriately).
The output is computed from p(r,c) = p(r|c) * p(c).
Reimplemented from PLearn::OnlineLearningModule.
Definition at line 524 of file NnlmOutputLayer.cc.
References applyMuAndSigmaEmpiricalUpdate(), compute_approx_nl_p_t_r(), compute_nl_p_rt(), compute_nl_p_t_r(), cost, COST_APPROX_DISCR, COST_DISCR, COST_NON_DISCR, is_learning, learning, LEARNING_EMPIRICAL, PLERROR, target, and the_real_target.
{
    the_real_target = target;

    // *** In the case of empirical (max likelihood) learning of mu and sigma, ***
    // we can update mu and sigma before computing the cost and backpropagating.
    if( (learning==LEARNING_EMPIRICAL) && is_learning ) {
        applyMuAndSigmaEmpiricalUpdate(input);
    }

    // *** Non-discriminant cost: -log( p(r,t) ) ***
    if( cost == COST_NON_DISCR ) {
        compute_nl_p_rt( input, output );
    }
    // *** Approx-discriminant cost ***
    else if( cost == COST_APPROX_DISCR ) {
        compute_approx_nl_p_t_r( input, output );
    }
    // *** Discriminant cost: -log( p(t|r) ) ***
    else if( cost == COST_DISCR ) {
        compute_nl_p_t_r( input, output );
    } else {
        PLERROR("NnlmOutputLayer::fprop - invalid cost\n");
    }
}
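A hedged end-to-end sketch of one training step (names such as target_word are illustrative; in PLearn these calls are normally driven by NnlmOnlineLearner):

    Vec input( layer->input_size );        // 'r': the representation from the previous layer
    Vec output( 1 );                       // receives the selected cost
    Vec input_gradient( layer->input_size );
    Vec output_gradient( 1 );              // this module computes its own cost gradient

    layer->is_learning = true;             // enables the empirical update inside fprop
    layer->setTarget( target_word );       // hypothetical word index
    layer->setContext( context_word );     // hypothetical context index
    layer->fprop( input, output );         // output[0] = cost for 'target' given 'input'
    layer->bpropUpdate( input, output, input_gradient, output_gradient );
    // input_gradient now holds d(cost)/d(input), to be backpropagated further.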
void PLearn::NnlmOutputLayer::getBestCandidates | ( | const Vec & | input, |
Vec & | candidate_tags, | ||
Vec & | probabilities | ||
) | const |
returns best candidates according to compute_nl_p_t_r
May be called after compute_nl_p_t_r to find out which words get highest probability according to the model.
Definition at line 682 of file NnlmOutputLayer.cc.
References PLearn::TVec< T >::clear(), compute_nl_p_rt(), i, log_sum_p_ru, PLearn::TVec< T >::resize(), PLearn::safeexp(), setTarget(), target_cardinality, u, and PLearn::wordAndProbGT().
{
    candidate_tags.resize(10);
    probabilities.resize(10);

    std::vector< wordAndProb > tmp;
    Vec nl_p_ru(1);

    for(int u=0; u<target_cardinality; u++) {
        setTarget( u );
        compute_nl_p_rt( input, nl_p_ru );
        tmp.push_back( wordAndProb( u, safeexp( - (nl_p_ru[0] + log_sum_p_ru) ) ) );
    }
    std::sort(tmp.begin(), tmp.end(), wordAndProbGT);

    // HACK we don't check if itr has hit the end... unlikely target_cardinality is smaller than 10
    std::vector< wordAndProb >::iterator itr_vec;
    itr_vec = tmp.begin();
    for(int i=0; i<10; i++) {
        candidate_tags[i] = itr_vec->wordtag;
        probabilities[i]  = itr_vec->probability;
        itr_vec++;
    }
    tmp.clear();
}
OptionList & PLearn::NnlmOutputLayer::getOptionList | ( | ) | const [virtual] |
OptionMap & PLearn::NnlmOutputLayer::getOptionMap | ( | ) | const [virtual] |
RemoteMethodMap & PLearn::NnlmOutputLayer::getRemoteMethodMap | ( | ) | const [virtual] |
void PLearn::NnlmOutputLayer::makeDeepCopyFromShallowCopy | ( | CopiesMap & | copies | ) | [virtual] |
Transforms a shallow copy into a deep copy.
Reimplemented from PLearn::OnlineLearningModule.
Definition at line 211 of file NnlmOutputLayer.cc.
References ad_gradient, beta, PLearn::deepCopyField(), el_decrease_constant, el_last_update, el_start_learning_rate, fd_gradient, gradient_log_tmp, gradient_log_tmp_neg, gradient_log_tmp_pos, PLearn::OnlineLearningModule::makeDeepCopyFromShallowCopy(), mu, nd_gradient, pi, sigma2, sumI, sumR, sumR2, vec_log_p_r_t, vec_log_p_rg_t, and vec_log_p_rt.
{
    inherited::makeDeepCopyFromShallowCopy(copies);

    deepCopyField(pi, copies);
    deepCopyField(mu, copies);
    deepCopyField(sigma2, copies);
    deepCopyField(sumI, copies);
    deepCopyField(sumR, copies);
    deepCopyField(sumR2, copies);
    deepCopyField(el_start_learning_rate, copies);
    deepCopyField(el_decrease_constant, copies);
    deepCopyField(el_last_update, copies);
    deepCopyField(vec_log_p_rg_t, copies);
    deepCopyField(vec_log_p_r_t, copies);
    deepCopyField(vec_log_p_rt, copies);
    deepCopyField(beta, copies);
    deepCopyField(nd_gradient, copies);
    deepCopyField(ad_gradient, copies);
    deepCopyField(fd_gradient, copies);
    deepCopyField(gradient_log_tmp, copies);
    deepCopyField(gradient_log_tmp_pos, copies);
    deepCopyField(gradient_log_tmp_neg, copies);
}
void PLearn::NnlmOutputLayer::resetAllClassVars | ( | ) |
Used for initializing s_sumI, sumI, sumR, sumR2, as well as pi, mu and sigma2 to the max likelihood values.
Definition at line 306 of file NnlmOutputLayer.cc.
References PLearn::endl(), PLearn::TVec< T >::fill(), PLearn::TMat< T >::fill(), global_sumR, global_sumR2, PLearn::OnlineLearningModule::input_size, PLearn::TMat< T >::resize(), PLearn::TVec< T >::resize(), s_sumI, sumI, sumR, sumR2, and target_cardinality.
Referenced by resetParameters().
{ cout << "NnlmOutputLayer::resetAllClassVars()" << endl; s_sumI = 0; sumI.resize( target_cardinality ); sumI.fill( 0 ); sumR.resize( target_cardinality, input_size); sumR.fill( 0.0 ); sumR2.resize( target_cardinality, input_size); sumR2.fill( 0.0 ); // ### for a global_sigma2 global_sumR.resize(input_size); global_sumR.fill( 0.0 ); global_sumR2.resize(input_size); global_sumR2.fill( 0.0 ); // ### for a global_sigma2 }
void PLearn::NnlmOutputLayer::resetParameters | ( | ) |
Resizes variables and sets pretty much everything back to a 'zero' value.
Definition at line 248 of file NnlmOutputLayer.cc.
References ad_gradient, beta, bill, bob, PLearn::endl(), fd_gradient, PLearn::TVec< T >::fill(), PLearn::TMat< T >::fill(), global_mu, global_sigma2, gradient_log_tmp, gradient_log_tmp_neg, gradient_log_tmp_pos, PLearn::OnlineLearningModule::input_size, mu, nd_gradient, pi, resetAllClassVars(), PLearn::TMat< T >::resize(), PLearn::TVec< T >::resize(), sigma2, step_number, target_cardinality, umc, vec_log_p_r_t, vec_log_p_rg_t, and vec_log_p_rt.
Referenced by build_(), and forget().
{ cout << "NnlmOutputLayer::resetParameters()" << endl; step_number = 0; umc = 0.999999; // ### pi.resize( target_cardinality ); pi.fill( 0.0 ); mu.resize( target_cardinality, input_size); mu.fill( 0.0 ); sigma2.resize( target_cardinality, input_size); sigma2.fill( 0.0 ); // ### for a global_sigma2 global_mu.resize(input_size); global_mu.fill( 0.0 ); global_sigma2.resize(input_size); global_sigma2.fill( 0.0 ); // ### for a global_sigma2 resetAllClassVars(); vec_log_p_rg_t.resize( target_cardinality ); vec_log_p_r_t.resize( target_cardinality ); vec_log_p_rt.resize( target_cardinality ); beta.resize( target_cardinality, input_size ); nd_gradient.resize( input_size ); nd_gradient.fill( 0.0 ); ad_gradient.resize( input_size ); ad_gradient.fill( 0.0 ); fd_gradient.resize( input_size ); fd_gradient.fill( 0.0 ); bill.resize( input_size ); bill.fill( 0.0 ); bob.resize( input_size ); bob.fill( 0.0 ); gradient_log_tmp.resize( input_size ); gradient_log_tmp.fill( 0.0 ); gradient_log_tmp_pos.resize( input_size ); gradient_log_tmp_pos.fill( 0.0 ); gradient_log_tmp_neg.resize( input_size ); gradient_log_tmp_neg.fill( 0.0 ); //log_p_g_r = safelog( 0.9 ); //sum_log_p_g_r = -REAL_MAX; }
void PLearn::NnlmOutputLayer::setContext | ( | int | the_context | ) | const |
Sets the context. The Candidates set of the approximated discriminant cost is determined from the context.
Definition at line 469 of file NnlmOutputLayer.cc.
References context, context_cardinality, and PLERROR.
{
#ifdef BOUNDCHECK
    if( the_context >= context_cardinality ) {
        PLERROR("NnlmOutputLayer::setContext:'the_context'(=%i) >= 'context_cardinality'(=%i)\n",
                the_context, context_cardinality);
    }
#endif
    context = the_context;
}
void PLearn::NnlmOutputLayer::setCost | ( | int | the_cost | ) |
void PLearn::NnlmOutputLayer::setLearning | ( | int | the_learning | ) |
void PLearn::NnlmOutputLayer::setTarget | ( | int | the_target | ) | const |
Sets t, the target.
Definition at line 454 of file NnlmOutputLayer.cc.
References PLERROR, target, and target_cardinality.
Referenced by compute_approx_nl_p_t_r(), compute_nl_p_t_r(), and getBestCandidates().
{
#ifdef BOUNDCHECK
    if( the_target >= target_cardinality ) {
        PLERROR("NnlmOutputLayer::setTarget:'the_target'(=%i) >= 'target_cardinality'(=%i)\n",
                the_target, target_cardinality);
    }
#endif
    target = the_target;
}
void PLearn::NnlmOutputLayer::updateClassVars | ( | const int | the_target,
const Vec & | the_input | ||
) |
Definition at line 330 of file NnlmOutputLayer.cc.
References global_sumR, global_sumR2, i, PLearn::OnlineLearningModule::input_size, PLERROR, s_sumI, sumI, sumR, sumR2, and target_cardinality.
{
#ifdef BOUNDCHECK
    if( the_target >= target_cardinality ) {
        PLERROR("NnlmOutputLayer::updateClassVars:'the_target'(=%i) >= 'target_cardinality'(=%i)\n",
                the_target, target_cardinality);
    }
#endif
    s_sumI++;
    sumI[the_target]++;
    for(int i=0; i<input_size; i++) {
        sumR( the_target, i ) += the_input[i];
        sumR2( the_target, i ) += the_input[i]*the_input[i];
        // ### for a global_sigma2
        global_sumR[i] += the_input[i];
        global_sumR2[i] += the_input[i]*the_input[i];
        // ### for a global_sigma2
    }
}
StaticInitializer PLearn::NnlmOutputLayer::_static_initializer_ [static]
Reimplemented from PLearn::OnlineLearningModule.
Definition at line 201 of file NnlmOutputLayer.h.
Vec PLearn::NnlmOutputLayer::ad_gradient [mutable] |
Definition at line 307 of file NnlmOutputLayer.h.
Referenced by bpropUpdate(), computeApproxDiscriminantGradient(), makeDeepCopyFromShallowCopy(), and resetParameters().
Mat PLearn::NnlmOutputLayer::beta [mutable] |
Definition at line 301 of file NnlmOutputLayer.h.
Referenced by addCandidateContribution(), applyMuCandidateGradient(), applyMuTargetGradient(), applySigmaCandidateGradient(), applySigmaGradient(), applySigmaTargetGradient(), compute_nl_p_rt(), computeNonDiscriminantGradient(), makeDeepCopyFromShallowCopy(), and resetParameters().
Vec PLearn::NnlmOutputLayer::bill [mutable] |
Definition at line 310 of file NnlmOutputLayer.h.
Referenced by resetParameters().
Vec PLearn::NnlmOutputLayer::bob [mutable] |
Definition at line 311 of file NnlmOutputLayer.h.
Referenced by resetParameters().
TVec< TVec< int > > PLearn::NnlmOutputLayer::candidates
Definition at line 263 of file NnlmOutputLayer.h.
Referenced by applyMuGradient(), applySigmaGradient(), compute_approx_nl_p_t_r(), and computeApproxDiscriminantGradient().
int PLearn::NnlmOutputLayer::context [mutable] |
Definition at line 285 of file NnlmOutputLayer.h.
Referenced by applyMuGradient(), applySigmaGradient(), compute_approx_nl_p_t_r(), computeApproxDiscriminantGradient(), and setContext().
int PLearn::NnlmOutputLayer::context_cardinality
specifies the range of the values of 'context' (ex: + 'missing' tag)
Definition at line 81 of file NnlmOutputLayer.h.
Referenced by declareOptions(), and setContext().
int PLearn::NnlmOutputLayer::cost
Must be set before calling fprop.
the cost
Definition at line 280 of file NnlmOutputLayer.h.
Referenced by applyMuGradient(), applySigmaGradient(), bpropUpdate(), fprop(), and setCost().
real PLearn::NnlmOutputLayer::dl_decrease_constant
Definition at line 88 of file NnlmOutputLayer.h.
Referenced by applyMuGradient(), applySigmaGradient(), and declareOptions().
real PLearn::NnlmOutputLayer::dl_lr [mutable] |
Definition at line 327 of file NnlmOutputLayer.h.
Referenced by applyMuCandidateGradient(), applyMuGradient(), applyMuTargetGradient(), applySigmaCandidateGradient(), applySigmaGradient(), and applySigmaTargetGradient().
real PLearn::NnlmOutputLayer::dl_start_learning_rate
Discriminant learning (of μ and σ²) - dl.
Definition at line 87 of file NnlmOutputLayer.h.
Referenced by applyMuGradient(), applySigmaGradient(), and declareOptions().
Vec PLearn::NnlmOutputLayer::el_decrease_constant
Definition at line 318 of file NnlmOutputLayer.h.
Referenced by computeEmpiricalLearningRateParameters(), and makeDeepCopyFromShallowCopy().
real PLearn::NnlmOutputLayer::el_dr [mutable] |
The original way of computing the mus and sigmas (ex. for mu: accumulate the r's and then divide) had the effect that learning slowed down with time. We use this discount rate now. TODO validate computation of mus and sigmas (gaussian_learning_discount_rate).
Definition at line 326 of file NnlmOutputLayer.h.
Vec PLearn::NnlmOutputLayer::el_last_update [mutable]
Definition at line 319 of file NnlmOutputLayer.h.
Referenced by applyMuAndSigmaEmpiricalUpdate(), computeEmpiricalLearningRateParameters(), and makeDeepCopyFromShallowCopy().
real PLearn::NnlmOutputLayer::el_start_discount_factor
Empirical learning (of μ and σ²) - el.
Definition at line 94 of file NnlmOutputLayer.h.
Referenced by computeEmpiricalLearningRateParameters(), and declareOptions().
Vec PLearn::NnlmOutputLayer::el_start_learning_rate
Definition at line 317 of file NnlmOutputLayer.h.
Referenced by computeEmpiricalLearningRateParameters(), and makeDeepCopyFromShallowCopy().
Vec PLearn::NnlmOutputLayer::fd_gradient [mutable] |
Definition at line 308 of file NnlmOutputLayer.h.
Referenced by bpropUpdate(), computeDiscriminantGradient(), makeDeepCopyFromShallowCopy(), and resetParameters().
real PLearn::NnlmOutputLayer::g_exponent [mutable] |
Definition at line 291 of file NnlmOutputLayer.h.
Referenced by compute_nl_p_rt().
Vec PLearn::NnlmOutputLayer::global_mu [mutable]
Definition at line 246 of file NnlmOutputLayer.h.
Referenced by applyAllClassVars(), applyMuAndSigmaEmpiricalUpdate(), and resetParameters().
Vec PLearn::NnlmOutputLayer::global_sigma2 [mutable]
Definition at line 247 of file NnlmOutputLayer.h.
Referenced by applyAllClassVars(), applyMuAndSigmaEmpiricalUpdate(), and resetParameters().
Vec PLearn::NnlmOutputLayer::global_sumR [mutable]
Definition at line 257 of file NnlmOutputLayer.h.
Referenced by applyAllClassVars(), applyMuAndSigmaEmpiricalUpdate(), resetAllClassVars(), and updateClassVars().
Vec PLearn::NnlmOutputLayer::global_sumR2 [mutable]
Definition at line 258 of file NnlmOutputLayer.h.
Referenced by applyAllClassVars(), applyMuAndSigmaEmpiricalUpdate(), resetAllClassVars(), and updateClassVars().
Vec PLearn::NnlmOutputLayer::gradient_log_tmp [mutable] |
Definition at line 313 of file NnlmOutputLayer.h.
Referenced by computeApproxDiscriminantGradient(), computeDiscriminantGradient(), makeDeepCopyFromShallowCopy(), and resetParameters().
Vec PLearn::NnlmOutputLayer::gradient_log_tmp_neg [mutable]
Definition at line 315 of file NnlmOutputLayer.h.
Referenced by addCandidateContribution(), computeApproxDiscriminantGradient(), computeDiscriminantGradient(), makeDeepCopyFromShallowCopy(), and resetParameters().
Vec PLearn::NnlmOutputLayer::gradient_log_tmp_pos [mutable]
Definition at line 314 of file NnlmOutputLayer.h.
Referenced by addCandidateContribution(), computeApproxDiscriminantGradient(), computeDiscriminantGradient(), makeDeepCopyFromShallowCopy(), and resetParameters().
bool PLearn::NnlmOutputLayer::is_learning
Definition at line 329 of file NnlmOutputLayer.h.
Referenced by fprop().
int PLearn::NnlmOutputLayer::learning
Definition at line 275 of file NnlmOutputLayer.h.
Referenced by bpropUpdate(), fprop(), and setLearning().
real PLearn::NnlmOutputLayer::log_g_det_covariance [mutable]
Definition at line 292 of file NnlmOutputLayer.h.
Referenced by compute_nl_p_rt().
real PLearn::NnlmOutputLayer::log_g_normalization [mutable]
Definition at line 293 of file NnlmOutputLayer.h.
Referenced by compute_nl_p_rt().
real PLearn::NnlmOutputLayer::log_sum_p_ru [mutable] |
Definition at line 298 of file NnlmOutputLayer.h.
Referenced by applyMuCandidateGradient(), applyMuTargetGradient(), applySigmaCandidateGradient(), applySigmaTargetGradient(), compute_approx_nl_p_t_r(), compute_nl_p_t_r(), computeApproxDiscriminantGradient(), computeDiscriminantGradient(), and getBestCandidates().
Mat PLearn::NnlmOutputLayer::mu [mutable]
Gaussian parameters - p_g(r|c)
Definition at line 243 of file NnlmOutputLayer.h.
Referenced by applyAllClassVars(), applyMuAndSigmaEmpiricalUpdate(), applyMuCandidateGradient(), applyMuGradient(), applyMuTargetGradient(), build_(), compute_nl_p_rt(), declareOptions(), makeDeepCopyFromShallowCopy(), and resetParameters().
Vec PLearn::NnlmOutputLayer::nd_gradient [mutable] |
Definition at line 306 of file NnlmOutputLayer.h.
Referenced by applyMuGradient(), applyMuTargetGradient(), bpropUpdate(), computeApproxDiscriminantGradient(), computeDiscriminantGradient(), computeNonDiscriminantGradient(), makeDeepCopyFromShallowCopy(), and resetParameters().
Vec PLearn::NnlmOutputLayer::pi
pi(i) = empirical frequency of c==i, i.e. p(c)
Definition at line 240 of file NnlmOutputLayer.h.
Referenced by addCandidateContribution(), applyAllClassVars(), applyMuCandidateGradient(), applyMuTargetGradient(), applySigmaCandidateGradient(), applySigmaTargetGradient(), compute_nl_p_rt(), declareOptions(), makeDeepCopyFromShallowCopy(), and resetParameters().
real PLearn::NnlmOutputLayer::s [mutable] |
Definition at line 290 of file NnlmOutputLayer.h.
Referenced by compute_nl_p_rt().
int PLearn::NnlmOutputLayer::s_sumI [mutable] |
EMPIRICAL LEARNING Intermediaries.
Definition at line 251 of file NnlmOutputLayer.h.
Referenced by applyAllClassVars(), applyMuAndSigmaEmpiricalUpdate(), computeEmpiricalLearningRateParameters(), declareOptions(), resetAllClassVars(), and updateClassVars().
TVec< int > PLearn::NnlmOutputLayer::shared_candidates
Holds candidates.
Definition at line 262 of file NnlmOutputLayer.h.
Referenced by applyMuGradient(), applySigmaGradient(), compute_approx_nl_p_t_r(), and computeApproxDiscriminantGradient().
Mat PLearn::NnlmOutputLayer::sigma2 [mutable]
Definition at line 244 of file NnlmOutputLayer.h.
Referenced by applyAllClassVars(), applyMuAndSigmaEmpiricalUpdate(), applySigmaCandidateGradient(), applySigmaGradient(), applySigmaTargetGradient(), compute_nl_p_rt(), declareOptions(), makeDeepCopyFromShallowCopy(), and resetParameters().
real PLearn::NnlmOutputLayer::sigma2min
minimal value σ² can have
Definition at line 84 of file NnlmOutputLayer.h.
Referenced by applyAllClassVars(), applyMuAndSigmaEmpiricalUpdate(), applySigmaCandidateGradient(), applySigmaTargetGradient(), and declareOptions().
int PLearn::NnlmOutputLayer::step_number
keeps track of updates
Definition at line 230 of file NnlmOutputLayer.h.
Referenced by applyMuGradient(), applySigmaGradient(), declareOptions(), and resetParameters().
TVec< int > PLearn::NnlmOutputLayer::sumI [mutable]
Definition at line 252 of file NnlmOutputLayer.h.
Referenced by applyAllClassVars(), applyMuAndSigmaEmpiricalUpdate(), computeEmpiricalLearningRateParameters(), declareOptions(), makeDeepCopyFromShallowCopy(), resetAllClassVars(), and updateClassVars().
Mat PLearn::NnlmOutputLayer::sumR [mutable]
Definition at line 254 of file NnlmOutputLayer.h.
Referenced by applyAllClassVars(), applyMuAndSigmaEmpiricalUpdate(), declareOptions(), makeDeepCopyFromShallowCopy(), resetAllClassVars(), and updateClassVars().
Mat PLearn::NnlmOutputLayer::sumR2 [mutable]
Definition at line 255 of file NnlmOutputLayer.h.
Referenced by applyMuAndSigmaEmpiricalUpdate(), declareOptions(), makeDeepCopyFromShallowCopy(), resetAllClassVars(), and updateClassVars().
int PLearn::NnlmOutputLayer::target [mutable] |
the current word -> we use its parameters to compute output
Definition at line 283 of file NnlmOutputLayer.h.
Referenced by applyMuAndSigmaEmpiricalUpdate(), compute_nl_p_rt(), fprop(), and setTarget().
int PLearn::NnlmOutputLayer::target_cardinality
specifies the range of the values of 'target'
Definition at line 78 of file NnlmOutputLayer.h.
Referenced by applyAllClassVars(), applyMuGradient(), applySigmaGradient(), compute_nl_p_t_r(), computeDiscriminantGradient(), computeEmpiricalLearningRateParameters(), declareOptions(), getBestCandidates(), resetAllClassVars(), resetParameters(), setTarget(), and updateClassVars().
int PLearn::NnlmOutputLayer::the_real_target [mutable] |
Definition at line 284 of file NnlmOutputLayer.h.
Referenced by applyMuGradient(), applyMuTargetGradient(), applySigmaGradient(), applySigmaTargetGradient(), compute_approx_nl_p_t_r(), computeApproxDiscriminantGradient(), computeNonDiscriminantGradient(), and fprop().
real PLearn::NnlmOutputLayer::umc
We use a mixture with a uniform to prevent negligible probabilities, which cause gradient explosions.
Should be learned as the mean of p(g|r) (the probability that the gaussian is responsible for the observation, given r). The uniform mixture coefficient.
Definition at line 237 of file NnlmOutputLayer.h.
Referenced by compute_nl_p_rt(), declareOptions(), and resetParameters().
Vec PLearn::NnlmOutputLayer::vec_log_p_r_t [mutable] |
Definition at line 296 of file NnlmOutputLayer.h.
Referenced by applySigmaGradient(), applySigmaTargetGradient(), compute_nl_p_rt(), computeNonDiscriminantGradient(), makeDeepCopyFromShallowCopy(), and resetParameters().
Vec PLearn::NnlmOutputLayer::vec_log_p_rg_t [mutable] |
Definition at line 295 of file NnlmOutputLayer.h.
Referenced by addCandidateContribution(), applyMuCandidateGradient(), applyMuTargetGradient(), applySigmaCandidateGradient(), applySigmaGradient(), applySigmaTargetGradient(), compute_nl_p_rt(), computeNonDiscriminantGradient(), makeDeepCopyFromShallowCopy(), and resetParameters().
Vec PLearn::NnlmOutputLayer::vec_log_p_rt [mutable] |
Definition at line 297 of file NnlmOutputLayer.h.
Referenced by compute_approx_nl_p_t_r(), compute_nl_p_rt(), makeDeepCopyFromShallowCopy(), and resetParameters().