PLearn 0.1
PLearn::NeuralProbabilisticLanguageModel Class Reference

Feedforward neural network for language modeling. More...

#include <NeuralProbabilisticLanguageModel.h>

Inheritance diagram for PLearn::NeuralProbabilisticLanguageModel:
Collaboration diagram for PLearn::NeuralProbabilisticLanguageModel:


Public Member Functions

 NeuralProbabilisticLanguageModel ()
virtual ~NeuralProbabilisticLanguageModel ()
virtual string classname () const
virtual OptionList & getOptionList () const
virtual OptionMap & getOptionMap () const
virtual RemoteMethodMap & getRemoteMethodMap () const
virtual NeuralProbabilisticLanguageModel * deepCopy (CopiesMap &copies) const
virtual void build ()
 Finish building the object; just call inherited::build followed by build_()
virtual void forget ()
 *** SUBCLASS WRITING: ***
virtual int outputsize () const
 SUBCLASS WRITING: override this so that it returns the size of this learner's output, as a function of its inputsize(), targetsize() and set options.
virtual TVec< string > getTrainCostNames () const
 *** SUBCLASS WRITING: ***
virtual TVec< string > getTestCostNames () const
 *** SUBCLASS WRITING: ***
virtual void train ()
 *** SUBCLASS WRITING: ***
virtual void computeOutput (const Vec &input, Vec &output) const
 *** SUBCLASS WRITING: ***
virtual void computeOutputAndCosts (const Vec &input, const Vec &target, Vec &output, Vec &costs) const
 Default calls computeOutput and computeCostsFromOutputs.
virtual void computeCostsFromOutputs (const Vec &input, const Vec &output, const Vec &target, Vec &costs) const
 *** SUBCLASS WRITING: ***
virtual void makeDeepCopyFromShallowCopy (CopiesMap &copies)
 Transforms a shallow copy into a deep copy.

Static Public Member Functions

static string _classname_ ()
static OptionList & _getOptionList_ ()
static RemoteMethodMap & _getRemoteMethodMap_ ()
static Object * _new_instance_for_typemap_ ()
static bool _isa_ (const Object *o)
static void _static_initialize_ ()
static const PPath & declaringFile ()

Public Attributes

Mat w1
 Weights of first hidden layer.
Mat gradient_w1
 Gradient on weights of first hidden layer.
Vec b1
 Bias of first hidden layer.
Vec gradient_b1
 Gradient on bias of first hidden layer.
Mat w2
 Weights of second hidden layer.
Mat gradient_w2
 gradient on weights of second hidden layer
Vec b2
 Bias of second hidden layer.
Vec gradient_b2
 Gradient on bias of second hidden layer.
Mat wout
 Weights of output layer.
Mat gradient_wout
 Gradient on weights of output layer.
Vec bout
 Bias of output layer.
Vec gradient_bout
 Gradient on bias of output layer.
Mat direct_wout
 Direct input to output weights.
Mat gradient_direct_wout
 Gradient on direct input to output weights.
Vec direct_bout
 Direct input to output bias (empty, since no bias is used)
Vec gradient_direct_bout
 Gradient on direct input to output bias (empty, since no bias is used)
Mat wout_dist_rep
 Weights of output layer for distributed representation predictor.
Mat gradient_wout_dist_rep
 Gradient on weights of output layer for distributed representation predictor.
Vec bout_dist_rep
 Bias of output layer for distributed representation predictor.
Vec gradient_bout_dist_rep
 Gradient on bias of output layer for distributed representation predictor.
int nhidden
 Number of hidden units in first hidden layer (default:0)
int nhidden2
 Number of hidden units in second hidden layer (default:0)
real weight_decay
 Weight decay (default:0)
real bias_decay
 Bias decay (default:0)
real layer1_weight_decay
 Weight decay for weights from input layer to first hidden layer (default:0)
real layer1_bias_decay
 Bias decay for weights from input layer to first hidden layer (default:0)
real layer2_weight_decay
 Weight decay for weights from first hidden layer to second hidden layer (default:0)
real layer2_bias_decay
 Bias decay for weights from first hidden layer to second hidden layer (default:0)
real output_layer_weight_decay
 Weight decay for weights from last hidden layer to output layer (default:0)
real output_layer_bias_decay
 Bias decay for weights from last hidden layer to output layer (default:0)
real direct_in_to_out_weight_decay
 Weight decay for weights from input directly to output layer (default:0)
real output_layer_dist_rep_weight_decay
 Weight decay for weights from last hidden layer to output layer of distributed representation predictor (default:0)
real output_layer_dist_rep_bias_decay
 Bias decay for weights from last hidden layer to output layer of distributed representation predictor (default:0)
real margin
 Margin requirement, used only with the margin_perceptron_cost cost function (default:1)
bool fixed_output_weights
 If true then the output weights are not learned.
bool direct_in_to_out
 If true then direct input to output weights will be added (if nhidden > 0)
string penalty_type
 Penalty to use on the weights (for weight and bias decay) (default:"L2_square")
string output_transfer_func
 Transfer function to use for output layer (default:"")
string hidden_transfer_func
 Transfer function to use for hidden units (default:"tanh"): tanh, sigmoid, softplus, softmax, etc.
TVec< string > cost_funcs
 Cost functions.
real start_learning_rate
 Start learning rate of gradient descent.
real decrease_constant
 Decrease constant of gradient descent.
int batch_size
 Number of samples to use to estimate gradient before an update.
bool stochastic_gradient_descent_speedup
 Indication that a trick to speedup stochastic gradient descent should be used.
string initialization_method
 Method of initialization for neural network's weights.
int dist_rep_dim
 Dimensionality (number of components) of distributed representations. If <= 0, then distributed representations will not be used.
bool possible_targets_vary
 Indication that the set of possible targets vary from one input vector to another.
TVec< PP< FeatureSet > > feat_sets
 FeatureSets to apply on input.
PP< PDistribution > proposal_distribution
 Proposal distribution for importance sampling speedup method (Bengio and Senecal 2006).
bool train_proposal_distribution
 Indication that the proposal distribution must be trained (using train_set).
int sampling_block_size
 Size of the sampling blocks.
int minimum_effective_sample_size
 Minimum effective sample size.

Static Public Attributes

static StaticInitializer _static_initializer_

Protected Member Functions

void fprop (const Vec &inputv, Vec &outputv, const Vec &targetv, Vec &costsv, real sampleweight=1) const
 Forward propagation in the network.
void fpropOutput (const Vec &inputv, Vec &outputv) const
 Forward propagation to compute the output.
void fpropBeforeOutputWeights (const Vec &inputv) const
 Forward propagation until output weights are reached (called by fpropOutput(...) and importance_sampling_gradient_update(...)).
void fpropCostsFromOutput (const Vec &inputv, const Vec &outputv, const Vec &targetv, Vec &costsv, real sampleweight=1) const
 Forward propagation to compute the costs from the output.
void bprop (Vec &inputv, Vec &outputv, Vec &targetv, Vec &costsv, real learning_rate, real sampleweight=1)
 Backward propagation in the network, which assumes that a forward propagation has been done before.
void update ()
 Update network's parameters.
void update_affine_transform (Vec input, Mat weights, Vec bias, Mat gweights, Vec gbias, bool input_is_sparse, bool output_is_sparse, Vec output_indices)
 Update affine transformation's parameters.
void clearProppathGradient ()
 Clear network's propagation path gradient fields. Assumes fprop and bprop have been called before.
virtual void initializeParams (bool set_seed=true)
 Initialize the parameters.
void add_transfer_func (const Vec &input, string transfer_func="default") const
 Computes the result of the application of the given transfer function on the input vector.
void gradient_transfer_func (Vec &output, Vec &gradient_input, Vec &gradient_output, string transfer_func="default", int nll_softmax_speed_up_target=-1)
 Computes the gradient through the given activation function, given the output value and the initial gradient on that output.
void add_affine_transform (Vec input, Mat weights, Vec bias, Vec output, bool input_is_sparse, bool output_is_sparse, Vec output_indices=Vec(0)) const
 Applies affine transform on input using provided weights and bias.
void gradient_affine_transform (Vec input, Mat weights, Vec bias, Vec ginput, Mat gweights, Vec gbias, Vec goutput, bool input_is_sparse, bool output_is_sparse, real learning_rate, real weight_decay, real bias_decay, Vec output_indices=Vec(0))
 Propagate gradient through affine transform on input using provided weights and bias.
void gradient_penalty (Vec input, Mat weights, Vec bias, Mat gweights, Vec gbias, bool input_is_sparse, bool output_is_sparse, real learning_rate, real weight_decay, real bias_decay, Vec output_indices=Vec(0))
 Propagate penalty gradient through weights and bias, scaled by -learning rate.
void importance_sampling_gradient_update (Vec &inputv, Vec &targetv, real learning_rate, int n_samples, real train_sample_weight=1)
 Update the neural network parameters using the importance sampling estimate of the gradient, based on n_samples of the proposal distribution.
void getNegativeEnergyValues (Vec samples, Vec neg_energies)
 Gives scalar negative energy values for some samples (words).
void fillWeights (const Mat &weights)
 Fill a matrix of weights according to the 'initialization_method' specified.
void verify_gradient (Vec &input, Vec target, real step)
 Verify gradient of propagation path.
void verify_gradient_affine_transform (Vec global_input, Vec &global_output, Vec &global_targetv, Vec &global_costs, real sampleweight, Vec input, Mat weights, Vec bias, Mat est_gweights, Vec est_gbias, bool input_is_sparse, bool output_is_sparse, real step, Vec output_indices=Vec(0)) const
 Verify gradient of affine_transform parameters.
void output_gradient_verification (Vec grad, Vec est_grad)
void batchComputeOutputAndConfidence (VMat inputs, real probability, VMat outputs_and_confidence) const
 Changes the reference_set and then calls the parent class's method.
virtual void use (VMat testset, VMat outputs) const
 Changes the reference_set and then calls the parent class's method.
virtual void test (VMat testset, PP< VecStatsCollector > test_stats, VMat testoutputs=0, VMat testcosts=0) const
 Changes the reference_set and then calls the parent class's method.
virtual VMat processDataSet (VMat dataset) const
 Changes the reference_set and then calls the parent class's method.

Static Protected Member Functions

static void declareOptions (OptionList &ol)
 Declares this class' options.

Protected Attributes

int total_output_size
 Total output size.
int total_updates
 Total updates so far.
int n_feat_sets
 Number of feature sets.
int total_feats_per_token
 Number of features per input token for which a distributed representation is computed.
int reind_target
 Reindexed target.
Vec feat_input
 Feature input.
Vec gradient_feat_input
 Gradient on feature input (useless for now)
Vec nnet_input
 Input vector to NNet (after mapping into distributed representations)
Vec gradient_nnet_input
 Gradient on input vector to NNet.
Vec hiddenv
 First hidden layer value.
Vec gradient_hiddenv
 Gradient of first hidden layer.
Vec gradient_act_hiddenv
 Gradient through first hidden layer activation.
Vec hidden2v
 Second hidden layer value.
Vec gradient_hidden2v
 Gradient of second hidden layer.
Vec gradient_act_hidden2v
 Gradient through second hidden layer activation.
Vec gradient_outputv
 Gradient on output.
Vec gradient_act_outputv
 Gradient through output layer activation.
PP< PRandom > rgen
 Random number generator for parameters initialization.
Vec feats_since_last_update
 Features seen in input since last update.
Vec target_values_since_last_update
 Possible target values seen since last update.
VMat val_string_reference_set
 VMatrix used to get values to string mapping for input tokens.
VMat target_values_reference_set
 Possible target values mapping.
Vec importance_sampling_ratios
 Importance sampling ratios of the samples.
Vec sample
 Generated sample from proposal distribution.
Vec generated_samples
 Set of generated samples from the proposal distribution.

Private Types

typedef PLearner inherited

Private Member Functions

void build_ ()
 **** SUBCLASS WRITING: ****
void compute_softmax (const Vec &x, const Vec &y) const
 Softmax vector y obtained on x. This implementation is such that compute_softmax(x,x) replaces x by its softmax value.
real nll (const Vec &outputv, int target) const
 Negative log-likelihood loss.
real classification_loss (const Vec &outputv, int target) const
 Classification loss.
int my_argmax (const Vec &vec, int default_compare=0) const
 Argmax function that lets you define the default (first) component used for comparisons.

Private Attributes

Vec target_values
 Vector of possible target values.
Vec output_comp
 Vector for output computations.
Vec row
 Row vector.
Vec last_layer
 Last layer of network (pointer to either nnet_input, hiddenv or hidden2v)
Vec gradient_last_layer
 Gradient of last layer in back propagation.
TVec< TVec< int > > feats
 Features for each token.
Vec gradient
 Temporary computation variable, used in fprop() and bprop(). Care must be taken when using these variables, since they are used by many different functions.
Vec neg_energies
Vec densities
string str
real * pval1
real * pval2
real * pval3
real * pval4
real * pval5
real val
real val2
real grad
int offset
int ni
int nj
int nk
int id
int nfeats
int ifeats
int * f

Detailed Description

Feedforward neural network for language modeling.

Implementation of the Neural Probabilistic Language Model proposed by Bengio, Ducharme, Vincent and Jauvin (JMLR 2003), with extensions to speed up the model (Bengio and Sénécal, AISTATS 2003) and to include prior information about the distributed representation and permit generalization of these distributed representations to out-of-vocabulary words using features (Larochelle and Bengio, Tech Report 2006).

Definition at line 58 of file NeuralProbabilisticLanguageModel.h.
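
As a rough sketch of how the learned parameters listed above fit together, the scoring function of the cited JMLR 2003 model can be written as below; the mapping onto this class's members (wout_dist_rep/bout_dist_rep for the distributed-representation lookup, w1/b1 for the hidden layer, wout/bout for the output layer, direct_wout for the optional direct connections) is an interpretation of the member list, not text taken from the source.

    x = [d_1; d_2; \ldots]                                   (concatenation of the context words' distributed representations, i.e. nnet_input)
    a = b_{out} + W_{out}^\top \tanh(b_1 + W_1^\top x) + W_{direct}^\top x    (direct term only when direct_in_to_out is true)
    P(w_t = i \mid \text{context}) = \mathrm{softmax}(a)_i   (when output_transfer_func is "softmax" and cost_funcs[0] is "NLL")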


Member Typedef Documentation

Reimplemented from PLearn::PLearner.

Definition at line 63 of file NeuralProbabilisticLanguageModel.h.


Constructor & Destructor Documentation

PLearn::NeuralProbabilisticLanguageModel::NeuralProbabilisticLanguageModel ( )
PLearn::NeuralProbabilisticLanguageModel::~NeuralProbabilisticLanguageModel ( ) [virtual]

Definition at line 96 of file NeuralProbabilisticLanguageModel.cc.

{
}

Member Function Documentation

string PLearn::NeuralProbabilisticLanguageModel::_classname_ ( ) [static]

Reimplemented from PLearn::PLearner.

Definition at line 60 of file NeuralProbabilisticLanguageModel.cc.

OptionList & PLearn::NeuralProbabilisticLanguageModel::_getOptionList_ ( ) [static]

Reimplemented from PLearn::PLearner.

Definition at line 60 of file NeuralProbabilisticLanguageModel.cc.

RemoteMethodMap & PLearn::NeuralProbabilisticLanguageModel::_getRemoteMethodMap_ ( ) [static]

Reimplemented from PLearn::PLearner.

Definition at line 60 of file NeuralProbabilisticLanguageModel.cc.

bool PLearn::NeuralProbabilisticLanguageModel::_isa_ ( const Object *  o) [static]

Reimplemented from PLearn::PLearner.

Definition at line 60 of file NeuralProbabilisticLanguageModel.cc.

Object * PLearn::NeuralProbabilisticLanguageModel::_new_instance_for_typemap_ ( ) [static]

Reimplemented from PLearn::Object.

Definition at line 60 of file NeuralProbabilisticLanguageModel.cc.

void PLearn::NeuralProbabilisticLanguageModel::_static_initialize_ ( ) [static]

Reimplemented from PLearn::PLearner.

Definition at line 60 of file NeuralProbabilisticLanguageModel.cc.

void PLearn::NeuralProbabilisticLanguageModel::add_affine_transform ( Vec  input,
Mat  weights,
Vec  bias,
Vec  output,
bool  input_is_sparse,
bool  output_is_sparse,
Vec  output_indices = Vec(0) 
) const [protected]

Applies affine transform on input using provided weights and bias.

Information about the nature of the input and output needs to be provided. If bias.length() == 0, then the output's initial value is used as the bias.

Definition at line 1202 of file NeuralProbabilisticLanguageModel.cc.

References PLearn::TVec< T >::data(), i, j, PLearn::TVec< T >::length(), ni, nj, pval1, pval2, pval3, and PLearn::transposeProductAcc().

Referenced by fpropBeforeOutputWeights(), fpropOutput(), and getNegativeEnergyValues().

{
    // Bias
    if(bias.length() != 0)
    {
        if(output_is_sparse)
        {
            pval1 = output.data();
            pval2 = bias.data();
            pval3 = output_indices.data();
            ni = output.length();
            for(int i=0; i<ni; i++)
                *pval1++ = pval2[(int)*pval3++];
        }
        else
        {
            pval1 = output.data();
            pval2 = bias.data();
            ni = output.length();
            for(int i=0; i<ni; i++)
                *pval1++ = *pval2++;
        }
    }

    // Weights
    if(!input_is_sparse && !output_is_sparse)
    {
        transposeProductAcc(output,weights,input);
    }
    else if(!input_is_sparse && output_is_sparse)
    {
        ni = output.length();
        nj = input.length();
        pval1 = output.data();
        pval3 = output_indices.data();
        for(int i=0; i<ni; i++)
        {
            pval2 = input.data();
            for(int j=0; j<nj; j++)
                *pval1 += (*pval2++)*weights(j,(int)*pval3);
            pval1++;
            pval3++;
        }
    }
    else if(input_is_sparse && !output_is_sparse)
    {
        ni = input.length();
        nj = output.length();
        if(ni != 0)
        {
            pval3 = input.data();
            for(int i=0; i<ni; i++)
            {
                pval1 = output.data();
                pval2 = weights[(int)(*pval3++)];
                for(int j=0; j<nj;j++)
                    *pval1++ += *pval2++;
            }
        }
    }
    else if(input_is_sparse && output_is_sparse)
    {
        // Weights
        ni = input.length();
        nj = output.length();
        if(ni != 0)
        {
            pval2 = input.data();
            for(int i=0; i<ni; i++)
            {
                pval1 = output.data();
                pval3 = output_indices.data();
                for(int j=0; j<nj; j++)
                    *pval1++ += weights((int)(*pval2),(int)*pval3++);
                pval2++;
            }
        }
    }
}
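
For reference, a compact restatement of the four branches above (with W = weights, b = bias, k_p = input[p] when the input is sparse, and c_j = output_indices[j] when the output is sparse); in every case the output is first set to the (index-selected) bias and then accumulated into:

    dense in, dense out:    \text{output} \mathrel{+}= W^\top \text{input}
    dense in, sparse out:   \text{output}_j \mathrel{+}= \sum_i \text{input}_i \, W_{i, c_j}
    sparse in, dense out:   \text{output} \mathrel{+}= \sum_p W_{k_p, :}    (sum of the selected rows of W)
    sparse in, sparse out:  \text{output}_j \mathrel{+}= \sum_p W_{k_p, c_j}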

Here is the call graph for this function:

Here is the caller graph for this function:

void PLearn::NeuralProbabilisticLanguageModel::add_transfer_func ( const Vec &  input,
string  transfer_func = "default" 
) const [protected]

Computes the result of the application of the given transfer function on the input vector.

Definition at line 1083 of file NeuralProbabilisticLanguageModel.cc.

References PLearn::compute_sigmoid(), compute_softmax(), PLearn::compute_tanh(), hidden_transfer_func, and PLERROR.

Referenced by fpropBeforeOutputWeights(), and fpropOutput().

{
    if (transfer_func == "default")
        transfer_func = hidden_transfer_func;
    if(transfer_func=="linear")
        return;
    else if(transfer_func=="tanh")
    {
        compute_tanh(input,input);
        return;
    }        
    else if(transfer_func=="sigmoid")
    {
        compute_sigmoid(input,input);
        return;
    }
    else if(transfer_func=="softmax")
    {
        compute_softmax(input,input);
        return;
    }
    else PLERROR("In NeuralProbabilisticLanguageModel::add_transfer_func(): "
                 "Unknown value for transfer_func: %s",transfer_func.c_str());
}

Here is the call graph for this function:

Here is the caller graph for this function:

void PLearn::NeuralProbabilisticLanguageModel::batchComputeOutputAndConfidence ( VMat  inputs,
real  probability,
VMat  outputs_and_confidence 
) const [protected, virtual]

Changes the reference_set and then calls the parent class's method.

Reimplemented from PLearn::PLearner.

Definition at line 3082 of file NeuralProbabilisticLanguageModel.cc.

References PLearn::PLearner::batchComputeOutputAndConfidence(), PLearn::PLearner::train_set, and val_string_reference_set.

{
    val_string_reference_set = inputs;
    inherited::batchComputeOutputAndConfidence(inputs,
                                               probability,
                                               outputs_and_confidence);
    val_string_reference_set = train_set;
}

Here is the call graph for this function:

void PLearn::NeuralProbabilisticLanguageModel::bprop ( Vec &  inputv,
Vec &  outputv,
Vec &  targetv,
Vec &  costsv,
real  learning_rate,
real  sampleweight = 1 
) [protected]

Backward propagation in the network, which assumes that a forward propagation has been done before.

A learning rate needs to be provided because it is -learning_rate * gradient that is propagated, not just the gradient.

Definition at line 624 of file NeuralProbabilisticLanguageModel.cc.

References PLearn::TVec< T >::append(), b1, b2, bias_decay, bout, bout_dist_rep, clearProppathGradient(), cost_funcs, direct_bout, direct_in_to_out, direct_in_to_out_weight_decay, direct_wout, dist_rep_dim, feat_input, feats, feats_since_last_update, gradient_act_hidden2v, gradient_act_hiddenv, gradient_act_outputv, gradient_affine_transform(), gradient_b1, gradient_b2, gradient_bout, gradient_bout_dist_rep, gradient_direct_bout, gradient_direct_wout, gradient_feat_input, gradient_hidden2v, gradient_hiddenv, gradient_last_layer, gradient_nnet_input, gradient_outputv, gradient_transfer_func(), gradient_w1, gradient_w2, gradient_wout, gradient_wout_dist_rep, hidden2v, hiddenv, i, ifeats, PLearn::PLearner::inputsize_, j, layer1_bias_decay, layer1_weight_decay, layer2_bias_decay, layer2_weight_decay, PLearn::TVec< T >::length(), n_feat_sets, nfeats, nhidden, nhidden2, nnet_input, output_layer_bias_decay, output_layer_dist_rep_bias_decay, output_layer_dist_rep_weight_decay, output_layer_weight_decay, output_transfer_func, PLERROR, possible_targets_vary, reind_target, PLearn::TVec< T >::resize(), stochastic_gradient_descent_speedup, PLearn::TVec< T >::subVec(), target_values, target_values_since_last_update, w1, w2, weight_decay, wout, and wout_dist_rep.

Referenced by train(), and verify_gradient().

{
    if(possible_targets_vary) 
    {
        gradient_outputv.resize(target_values.length());
        gradient_act_outputv.resize(target_values.length());
        if(!stochastic_gradient_descent_speedup)
            target_values_since_last_update.append(target_values);
    }

    if(!stochastic_gradient_descent_speedup)
        feats_since_last_update.append(feat_input);

    // Gradient through cost
    if(cost_funcs[0]=="NLL") 
    {
        // Permits to avoid numerical precision errors
        if(output_transfer_func == "softmax")
            gradient_outputv[reind_target] = learning_rate*sampleweight;
        else
            gradient_outputv[reind_target] = learning_rate*sampleweight/(outputv[reind_target]);            
    }
    else if(cost_funcs[0]=="class_error")
    {
        PLERROR("NeuralProbabilisticLanguageModel::bprop(): gradient "
                "cannot be computed for \"class_error\" cost");
    }

    // Gradient through output transfer function
    if(output_transfer_func != "linear")
    {
        if(cost_funcs[0]=="NLL" && output_transfer_func == "softmax")
            gradient_transfer_func(outputv,gradient_act_outputv, gradient_outputv,
                                    output_transfer_func, reind_target);
        else
            gradient_transfer_func(outputv,gradient_act_outputv, gradient_outputv,
                                    output_transfer_func);
        gradient_last_layer = gradient_act_outputv;
    }
    else
        gradient_last_layer = gradient_act_outputv;
    
    // Gradient through output affine transform


    if(nhidden2 > 0) {
        gradient_affine_transform(hidden2v, wout, bout, gradient_hidden2v, 
                                  gradient_wout, gradient_bout, 
                                  gradient_last_layer,
                                  false, possible_targets_vary, 
                                  learning_rate*sampleweight, 
                                  weight_decay+output_layer_weight_decay,
                                  bias_decay+output_layer_bias_decay,
                                  target_values);
    }
    else if(nhidden > 0) 
    {
        gradient_affine_transform(hiddenv, wout, bout, gradient_hiddenv,
                                  gradient_wout, gradient_bout, 
                                  gradient_last_layer,
                                  false, possible_targets_vary, 
                                  learning_rate*sampleweight, 
                                  weight_decay+output_layer_weight_decay,
                                  bias_decay+output_layer_bias_decay, 
                                  target_values);
    }
    else
    {
        gradient_affine_transform(nnet_input, wout, bout, gradient_nnet_input, 
                                  gradient_wout, gradient_bout, 
                                  gradient_last_layer,
                                  (dist_rep_dim <= 0), possible_targets_vary, 
                                  learning_rate*sampleweight, 
                                  weight_decay+output_layer_weight_decay,
                                  bias_decay+output_layer_bias_decay, 
                                  target_values);
    }


    if(nhidden>0 && direct_in_to_out)
    {
        gradient_affine_transform(nnet_input, direct_wout, direct_bout,
                                  gradient_nnet_input, 
                                  gradient_direct_wout, gradient_direct_bout,
                                  gradient_last_layer,
                                  dist_rep_dim<=0, possible_targets_vary,
                                  learning_rate*sampleweight, 
                                  weight_decay+direct_in_to_out_weight_decay,
                                  0,
                                  target_values);
    }


    if(nhidden2 > 0)
    {
        gradient_transfer_func(hidden2v,gradient_act_hidden2v,gradient_hidden2v);
        gradient_affine_transform(hiddenv, w2, b2, gradient_hiddenv, 
                                  gradient_w2, gradient_b2, gradient_act_hidden2v,
                                  false, false,learning_rate*sampleweight, 
                                  weight_decay+layer2_weight_decay,
                                  bias_decay+layer2_bias_decay);
    }
    if(nhidden > 0)
    {
        gradient_transfer_func(hiddenv,gradient_act_hiddenv,gradient_hiddenv);  
        gradient_affine_transform(nnet_input, w1, b1, gradient_nnet_input, 
                                  gradient_w1, gradient_b1, gradient_act_hiddenv,
                                  dist_rep_dim<=0, false,learning_rate*sampleweight, 
                                  weight_decay+layer1_weight_decay,
                                  bias_decay+layer1_bias_decay);
    }

    if(dist_rep_dim > 0)
    {
        nfeats = 0;
        id = 0;
        for(int i=0; i<inputsize_; )
        {
            ifeats = 0;
            for(int j=0; j<n_feat_sets; j++,i++)
                ifeats += feats[i].length();
            gradient_affine_transform(feat_input.subVec(nfeats,ifeats),
                                      wout_dist_rep, bout_dist_rep,
                                      //gradient_feat_input.subVec(nfeats,feats[i].length()),
                                      gradient_feat_input,// Useless anyways...
                                      gradient_wout_dist_rep,
                                      gradient_bout_dist_rep,
                                      gradient_nnet_input.subVec(
                                          id*dist_rep_dim,dist_rep_dim),
                                      true, false, learning_rate*sampleweight, 
                                      weight_decay+
                                      output_layer_dist_rep_weight_decay,
                                      bias_decay+output_layer_dist_rep_bias_decay);
            nfeats += ifeats;
            id++;
        }
    }

    clearProppathGradient();
}
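
A sketch of the reasoning behind the cost-gradient seeding at the top of this function (standard identities, not text from the source). With C = -\log p_t, p = \mathrm{softmax}(a), and \eta w = learning_rate * sampleweight (recall that -learning_rate * gradient is what gets propagated):

    \partial C / \partial p_t = -1/p_t           \Rightarrow  generic seed  -\eta w \, \partial C/\partial p_t = \eta w / p_t    (the non-softmax branch)
    \partial C / \partial a_j = p_j - \delta_{jt}             (softmax composed with NLL; no division by p_t needed)

The softmax branch therefore seeds only the target entry with \eta w and passes reind_target to gradient_transfer_func, which can form the activation gradient directly and avoid the numerically fragile division by p_t.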

Here is the call graph for this function:

Here is the caller graph for this function:

void PLearn::NeuralProbabilisticLanguageModel::build ( ) [virtual]

Finish building the object; just call inherited::build followed by build_()

Reimplemented from PLearn::PLearner.

Definition at line 357 of file NeuralProbabilisticLanguageModel.cc.

References PLearn::PLearner::build(), and build_().

Referenced by forget().

Here is the call graph for this function:

Here is the caller graph for this function:

void PLearn::NeuralProbabilisticLanguageModel::build_ ( ) [private]

**** SUBCLASS WRITING: ****

This method should finish building of the object, according to set 'options', in *any* situation.

Typical situations include:

  • Initial building of an object from a few user-specified options
  • Building of a "reloaded" object: i.e. from the complete set of all serialised options.
  • Updating or "re-building" of an object after a few "tuning" options (such as hyper-parameters) have been modified.

You can assume that the parent class' build_() has already been called.

A typical build method will want to know the inputsize(), targetsize() and outputsize(), and may also want to check whether train_set->hasWeights(). All these methods require a train_set to be set, so the first thing you may want to do, is check if(train_set), before doing any heavy building...

Note: build() is always called by setTrainingSet.

Reimplemented from PLearn::PLearner.

Definition at line 367 of file NeuralProbabilisticLanguageModel.cc.

References batch_size, cost_funcs, feat_sets, feats, PLearn::TVec< T >::fill(), fixed_output_weights, i, initializeParams(), PLearn::PLearner::inputsize_, PLearn::TVec< T >::length(), PLearn::lowerstring(), MISSING_VALUE, n_feat_sets, output_comp, penalty_type, PLERROR, PLWARNING, proposal_distribution, PLearn::TVec< T >::resize(), row, sample, PLearn::TVec< T >::size(), PLearn::PLearner::stage, stochastic_gradient_descent_speedup, target_values_reference_set, PLearn::PLearner::targetsize_, total_output_size, train_proposal_distribution, PLearn::PLearner::train_set, val_string_reference_set, PLearn::PLearner::weightsize_, and PLearn::VMat::width().

Referenced by build().

{
    // Don't do anything if we don't have a train_set
    // It's the only one who knows the inputsize, targetsize and weightsize

    if(inputsize_>=0 && targetsize_>=0 && weightsize_>=0)
    {
        if(targetsize_ != 1)
            PLERROR("In NeuralProbabilisticLanguageModel::build_(): "
                    "targetsize_ must be 1, not %d",targetsize_);

        n_feat_sets = feat_sets.length();

        if(n_feat_sets == 0)
            PLERROR("In NeuralProbabilisticLanguageModel::build_(): "
                    "at least one FeatureSet must be provided\n");
        
        if(inputsize_ % n_feat_sets != 0)
            PLERROR("In NeuralProbabilisticLanguageModel::build_(): "
                    "feat_sets.length() must be a divisor of inputsize()");
        
        // Process penalty type option
        string pt = lowerstring( penalty_type );
        if( pt == "l1" )
            penalty_type = "L1";
        else if( pt == "l2_square" || pt == "l2 square" || pt == "l2square" )
            penalty_type = "L2_square";
        else if( pt == "l2" )
        {
            PLWARNING("In NeuralProbabilisticLanguageModel::build_(): "
                      "L2 penalty not supported, assuming you want L2 square");
            penalty_type = "L2_square";
        }
        else
            PLERROR("In NeuralProbabilisticLanguageModel::build_(): "
                    "penalty_type \"%s\" not supported", penalty_type.c_str());
        
        int ncosts = cost_funcs.size();  
        if(ncosts<=0)
            PLERROR("In NeuralProbabilisticLanguageModel::build_(): "
                    "Empty cost_funcs : must at least specify the cost "
                    "function to optimize!");
        
        if(stage <= 0 ) // Training hasn't started
        {
            // Initialize parameters
            initializeParams();                        
        }
        
        output_comp.resize(total_output_size);
        row.resize(train_set->width());
        row.fill(MISSING_VALUE);
        feats.resize(inputsize_);
        // Making sure that all feats[i] have non null storage...
        for(int i=0; i<feats.length(); i++)
        {
            feats[i].resize(1);
            feats[i].resize(0);
        }
        if(fixed_output_weights && stochastic_gradient_descent_speedup)
            PLERROR("In NeuralProbabilisticLanguageModel::build_(): "
                    "cannot use stochastic gradient descent speedup with "
                    "fixed output weights");
        val_string_reference_set = train_set;
        target_values_reference_set = train_set;

        if(proposal_distribution)
        {
            if(batch_size != 1)
                PLERROR("In NeuralProbabilisticLanguageModel::build_(): "
                        "importance sampling speedup is not implemented for"
                        "batch size != 1");
            sample.resize(1);            
            if(train_proposal_distribution)
            {
                proposal_distribution->setTrainingSet(train_set);
                proposal_distribution->train();
            }
        }
    }
}

Here is the call graph for this function:

Here is the caller graph for this function:

real PLearn::NeuralProbabilisticLanguageModel::classification_loss ( const Vec &  outputv,
int  target 
) const [private]

Classification loss.

Definition at line 2174 of file NeuralProbabilisticLanguageModel.cc.

References PLearn::argmax().

Referenced by fpropCostsFromOutput().

{
    return (argmax(outputv) == target ? 0 : 1);
}

Here is the call graph for this function:

Here is the caller graph for this function:

string PLearn::NeuralProbabilisticLanguageModel::classname ( ) const [virtual]

Reimplemented from PLearn::Object.

Definition at line 60 of file NeuralProbabilisticLanguageModel.cc.

Referenced by train().

Here is the caller graph for this function:

void PLearn::NeuralProbabilisticLanguageModel::clearProppathGradient ( ) [protected]

Clear network's propagation path gradient fields. Assumes fprop and bprop have been called before.

Clear network's gradient fields.

Definition at line 937 of file NeuralProbabilisticLanguageModel.cc.

References PLearn::TVec< T >::clear(), cost_funcs, dist_rep_dim, gradient_act_hidden2v, gradient_act_hiddenv, gradient_act_outputv, gradient_hidden2v, gradient_hiddenv, gradient_nnet_input, gradient_outputv, nhidden, nhidden2, and reind_target.

Referenced by bprop(), importance_sampling_gradient_update(), and verify_gradient().

{
    // Trick to make clearProppathGradient faster...
    if(cost_funcs[0]=="NLL") 
        gradient_outputv[reind_target] = 0;
    else
        gradient_outputv.clear();
    gradient_act_outputv.clear();
    
    if(dist_rep_dim>0)
        gradient_nnet_input.clear();

    if(nhidden>0) 
    {
        gradient_hiddenv.clear();
        gradient_act_hiddenv.clear();
        if(nhidden2>0) 
        {
            gradient_hidden2v.clear();
            gradient_act_hidden2v.clear();
        }
    }
}

Here is the call graph for this function:

Here is the caller graph for this function:

void PLearn::NeuralProbabilisticLanguageModel::compute_softmax ( const Vec &  x,
const Vec &  y 
) const [private]

Softmax vector y obtained on x. This implementation is such that compute_softmax(x,x) replaces x by its softmax value.

Definition at line 2140 of file NeuralProbabilisticLanguageModel.cc.

References PLearn::TVec< T >::data(), i, PLearn::TVec< T >::length(), PLearn::max(), n, PLERROR, and PLearn::safeexp().

Referenced by add_transfer_func().

{
    int n = x.length();
    
//    real* yp = y.data();
//    real* xp = x.data();
//    for(int i=0; i<n; i++)
//    {
//        *yp++ = *xp > 1e-5 ? *xp : 1e-5;
//        xp++;
//    }

    if (n>0)
    {
        real* yp = y.data();
        real* xp = x.data();
        real maxx = max(x);
        real s = 0;
        for (int i=0;i<n;i++)
            s += (*yp++ = safeexp(*xp++-maxx));
        if (s == 0) PLERROR("trying to divide by 0 in softmax");
        s = 1.0 / s;
        yp = y.data();
        for (int i=0;i<n;i++)
            *yp++ *= s;
    }
}
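
The max subtraction above is the usual overflow guard; softmax is invariant to a common shift of its inputs:

    \mathrm{softmax}(x)_i = \frac{e^{x_i - m}}{\sum_j e^{x_j - m}}, \qquad m = \max_k x_k,

so every exponent argument is at most 0 and the maximal entry contributes e^0 = 1, making the normalizer at least 1 in exact arithmetic; the PLERROR above is a defensive check.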

Here is the call graph for this function:

Here is the caller graph for this function:

void PLearn::NeuralProbabilisticLanguageModel::computeCostsFromOutputs ( const Vec &  input,
const Vec &  output,
const Vec &  target,
Vec &  costs 
) const [virtual]

*** SUBCLASS WRITING: ***

This should be defined in subclasses to compute the weighted costs from already computed output. The costs should correspond to the cost names returned by getTestCostNames().

NOTE: In exotic cases, the cost may also depend on some info in the input; that's why the method also gets to see it.

Implements PLearn::PLearner.

Definition at line 965 of file NeuralProbabilisticLanguageModel.cc.

References PLERROR.

{
    PLERROR("In NeuralProbabilisticLanguageModel::computeCostsFromOutputs():"
            "output is not enough to compute costs");
}
void PLearn::NeuralProbabilisticLanguageModel::computeOutput ( const Vec &  input,
Vec &  output 
) const [virtual]

*** SUBCLASS WRITING: ***

This should be defined in subclasses to compute the output from the input.

Reimplemented from PLearn::PLearner.

Definition at line 996 of file NeuralProbabilisticLanguageModel.cc.

References PLearn::argmax(), fpropOutput(), PLearn::TVec< T >::length(), my_argmax(), output_comp, possible_targets_vary, rgen, and target_values.

{
    fpropOutput(inputv, output_comp);
    if(possible_targets_vary)
    {
        //row.subVec(0,inputsize_) << inputv;
        //target_values_reference_set->getValues(row,inputsize_,target_values);
        outputv[0] = target_values[
            my_argmax(output_comp,rgen->uniform_multinomial_sample(
                          output_comp.length()))];
    }
    else
        outputv[0] = argmax(output_comp);
}

Here is the call graph for this function:

void PLearn::NeuralProbabilisticLanguageModel::computeOutputAndCosts ( const Vec &  input,
const Vec &  target,
Vec &  output,
Vec &  costs 
) const [virtual]

Default calls computeOutput and computeCostsFromOutputs.

You may override this if you have a more efficient way to compute both output and weighted costs at the same time.

Reimplemented from PLearn::PLearner.

Definition at line 1010 of file Learner.cc.

References PLearn::Learner::computeCostsFromOutputs(), and PLearn::Learner::computeOutput().

{
    computeOutput(input, output);
    computeCostsFromOutputs(input, output, target, costs);
}

Here is the call graph for this function:

void PLearn::NeuralProbabilisticLanguageModel::declareOptions ( OptionList &  ol) [static, protected]

Declares this class' options.

Reimplemented from PLearn::PLearner.

Definition at line 100 of file NeuralProbabilisticLanguageModel.cc.

References b1, b2, batch_size, bias_decay, bout, bout_dist_rep, PLearn::OptionBase::buildoption, cost_funcs, PLearn::declareOption(), PLearn::PLearner::declareOptions(), decrease_constant, direct_bout, direct_in_to_out, direct_in_to_out_weight_decay, direct_wout, dist_rep_dim, feat_sets, fixed_output_weights, hidden_transfer_func, initialization_method, layer1_bias_decay, layer1_weight_decay, layer2_bias_decay, layer2_weight_decay, PLearn::OptionBase::learntoption, minimum_effective_sample_size, nhidden, nhidden2, output_layer_bias_decay, output_layer_dist_rep_bias_decay, output_layer_dist_rep_weight_decay, output_layer_weight_decay, output_transfer_func, penalty_type, possible_targets_vary, sampling_block_size, start_learning_rate, stochastic_gradient_descent_speedup, train_proposal_distribution, PLearn::PLearner::train_set, w1, w2, weight_decay, wout, and wout_dist_rep.

{
    declareOption(ol, "nhidden", &NeuralProbabilisticLanguageModel::nhidden, 
                  OptionBase::buildoption, 
                  "Number of hidden units in first hidden layer (0 means no "
                  "hidden layer).\n");
    
    declareOption(ol, "nhidden2", &NeuralProbabilisticLanguageModel::nhidden2, 
                  OptionBase::buildoption, 
                  "Number of hidden units in second hidden layer (0 means no "
                  "hidden layer).\n");
    
    declareOption(ol, "weight_decay", 
                  &NeuralProbabilisticLanguageModel::weight_decay, 
                  OptionBase::buildoption, 
                  "Global weight decay for all layers.\n");
    
    declareOption(ol, "bias_decay", &NeuralProbabilisticLanguageModel::bias_decay,
                  OptionBase::buildoption, 
                  "Global bias decay for all layers.\n");
    
    declareOption(ol, "layer1_weight_decay", 
                  &NeuralProbabilisticLanguageModel::layer1_weight_decay, 
                  OptionBase::buildoption, 
                  "Additional weight decay for the first hidden layer. "
                  "Is added to weight_decay.\n");
    
    declareOption(ol, "layer1_bias_decay", 
                  &NeuralProbabilisticLanguageModel::layer1_bias_decay, 
                  OptionBase::buildoption, 
                  "Additional bias decay for the first hidden layer. "
                  "Is added to bias_decay.\n");
    
    declareOption(ol, "layer2_weight_decay", 
                  &NeuralProbabilisticLanguageModel::layer2_weight_decay, 
                  OptionBase::buildoption, 
                  "Additional weight decay for the second hidden layer. "
                  "Is added to weight_decay.\n");
    
    declareOption(ol, "layer2_bias_decay", 
                  &NeuralProbabilisticLanguageModel::layer2_bias_decay, 
                  OptionBase::buildoption, 
                  "Additional bias decay for the second hidden layer. "
                  "Is added to bias_decay.\n");
    
    declareOption(ol, "output_layer_weight_decay", 
                  &NeuralProbabilisticLanguageModel::output_layer_weight_decay, 
                  OptionBase::buildoption, 
                  "Additional weight decay for the output layer. "
                  "Is added to 'weight_decay'.\n");
    
    declareOption(ol, "output_layer_bias_decay", 
                  &NeuralProbabilisticLanguageModel::output_layer_bias_decay, 
                  OptionBase::buildoption, 
                  "Additional bias decay for the output layer. "
                  "Is added to 'bias_decay'.\n");
    
    declareOption(ol, "direct_in_to_out_weight_decay", 
                  &NeuralProbabilisticLanguageModel::direct_in_to_out_weight_decay,
                  OptionBase::buildoption,
                  "Additional weight decay for the weights going from the "
                  "input directly to the \n output layer.  Is added to "
                  "'weight_decay'.\n");
    
    declareOption(ol, "output_layer_dist_rep_weight_decay", 
                  &NeuralProbabilisticLanguageModel::output_layer_dist_rep_weight_decay, 
                  OptionBase::buildoption, 
                  "Additional weight decay for the output layer of distributed"
                  "representation\n"
                  "predictor.  Is added to 'weight_decay'.\n");
    
    declareOption(ol, "output_layer_dist_rep_bias_decay", 
                  &NeuralProbabilisticLanguageModel::output_layer_dist_rep_bias_decay, 
                  OptionBase::buildoption, 
                  "Additional bias decay for the output layer of distributed"
                  "representation\n"
                  "predictor.  Is added to 'bias_decay'.\n");
    
    declareOption(ol, "fixed_output_weights", 
                  &NeuralProbabilisticLanguageModel::fixed_output_weights, 
                  OptionBase::buildoption, 
                  "If true then the output weights are not learned. They are"
                  "initialized to +1 or -1 randomly.\n");
    
    declareOption(ol, "direct_in_to_out", 
                  &NeuralProbabilisticLanguageModel::direct_in_to_out, 
                  OptionBase::buildoption, 
                  "If true then direct input to output weights will be added "
                  "(if nhidden > 0).\n");
    
    declareOption(ol, "penalty_type", 
                  &NeuralProbabilisticLanguageModel::penalty_type,
                  OptionBase::buildoption,
                  "Penalty to use on the weights (for weight and bias decay).\n"
                  "Can be any of:\n"
                  "  - \"L1\": L1 norm,\n"
                  "  - \"L2_square\" (default): square of the L2 norm.\n");
    
    declareOption(ol, "output_transfer_func", 
                  &NeuralProbabilisticLanguageModel::output_transfer_func, 
                  OptionBase::buildoption, 
                  "what transfer function to use for ouput layer? One of: \n"
                  "  - \"tanh\" \n"
                  "  - \"sigmoid\" \n"
                  "  - \"softmax\" \n"
                  "An empty string or \"none\" means no output transfer function \n");
    
    declareOption(ol, "hidden_transfer_func", 
                  &NeuralProbabilisticLanguageModel::hidden_transfer_func, 
                  OptionBase::buildoption, 
                  "What transfer function to use for hidden units? One of \n"
                  "  - \"linear\" \n"
                  "  - \"tanh\" \n"
                  "  - \"sigmoid\" \n"
                  "  - \"softmax\" \n");
    
    declareOption(ol, "cost_funcs", &NeuralProbabilisticLanguageModel::cost_funcs, 
                  OptionBase::buildoption, 
                  "A list of cost functions to use\n"
                  "in the form \"[ cf1; cf2; cf3; ... ]\" where each function "
                  "is one of: \n"
                  "  - \"NLL\" (negative log likelihood -log(p[c]) for "
                  "classification) \n"
                  "  - \"class_error\" (classification error) \n"
                  "The FIRST function of the list will be used as \n"
                  "the objective function to optimize \n"
                  "(possibly with an added weight decay penalty) \n");
    
    declareOption(ol, "start_learning_rate", 
                  &NeuralProbabilisticLanguageModel::start_learning_rate, 
                  OptionBase::buildoption, 
                  "Start learning rate of gradient descent.\n");
                  
    declareOption(ol, "decrease_constant", 
                  &NeuralProbabilisticLanguageModel::decrease_constant, 
                  OptionBase::buildoption, 
                  "Decrease constant of gradient descent.\n");

    declareOption(ol, "batch_size", 
                  &NeuralProbabilisticLanguageModel::batch_size, 
                  OptionBase::buildoption, 
                  "How many samples to use to estimate the avergage gradient before updating the weights\n"
                  "0 is equivalent to specifying training_set->length() \n");

    declareOption(ol, "stochastic_gradient_descent_speedup", 
                  &NeuralProbabilisticLanguageModel::stochastic_gradient_descent_speedup, 
                  OptionBase::buildoption, 
                  "Indication that a trick to speedup stochastic "
                  "gradient descent\n"
                  "should be used.\n");

    declareOption(ol, "initialization_method", 
                  &NeuralProbabilisticLanguageModel::initialization_method, 
                  OptionBase::buildoption, 
                  "The method used to initialize the weights:\n"
                  " - \"normal_linear\"  = a normal law with variance "
                  "1/n_inputs\n"
                  " - \"normal_sqrt\"    = a normal law with variance "
                  "1/sqrt(n_inputs)\n"
                  " - \"uniform_linear\" = a uniform law in [-1/n_inputs,"
                  "1/n_inputs]\n"
                  " - \"uniform_sqrt\"   = a uniform law in [-1/sqrt(n_inputs),"
                  "1/sqrt(n_inputs)]\n"
                  " - \"zero\"           = all weights are set to 0\n");
    
    declareOption(ol, "dist_rep_dim", 
                  &NeuralProbabilisticLanguageModel::dist_rep_dim, 
                  OptionBase::buildoption, 
                  " Dimensionality (number of components) of distributed "
                  "representations.\n"
                  "If <= 0, than distributed representations will not be used.\n"
        );
    
    declareOption(ol, "possible_targets_vary", 
                  &NeuralProbabilisticLanguageModel::possible_targets_vary, 
                  OptionBase::buildoption, 
                  "Indication that the set of possible targets vary from\n"
                  "one input vector to another.\n"
        );
    
    declareOption(ol, "feat_sets", &NeuralProbabilisticLanguageModel::feat_sets, 
                                OptionBase::buildoption, 
                  "FeatureSets to apply on input. The number of feature\n"
                  "sets should be a divisor of inputsize(). The feature\n"
                  "sets applied to the ith input field is the feature\n"
                  "set at position i % feat_sets.length().\n"
        );

    declareOption(ol, "train_proposal_distribution", 
                  &NeuralProbabilisticLanguageModel::train_proposal_distribution,
                  OptionBase::buildoption, 
                  "Indication that the proposal distribution must be trained\n"
                  "(using train_set).\n"
        );

    declareOption(ol, "sampling_block_size", 
                  &NeuralProbabilisticLanguageModel::sampling_block_size, 
                  OptionBase::buildoption, 
                  "Size of the sampling blocks.\n"
        );

    declareOption(ol, "minimum_effective_sample_size", 
                  &NeuralProbabilisticLanguageModel::minimum_effective_sample_size, 
                  OptionBase::buildoption, 
                  "Minimum effective sample size.\n"
        );

    declareOption(ol, "train_set", &NeuralProbabilisticLanguageModel::train_set, 
                  OptionBase::learntoption, 
                  "VMatrix used for training, that also provides information about the data (e.g. Dictionary objects for the different fields).\n");


                  // Networks' learnt parameters
    declareOption(ol, "w1", &NeuralProbabilisticLanguageModel::w1, 
                  OptionBase::learntoption, 
                  "Weights of first hidden layer.\n");
    declareOption(ol, "b1", &NeuralProbabilisticLanguageModel::b1, 
                  OptionBase::learntoption, 
                  "Bias of first hidden layer.\n");
    declareOption(ol, "w2", &NeuralProbabilisticLanguageModel::w2, 
                  OptionBase::learntoption, 
                  "Weights of second hidden layer.\n");
    declareOption(ol, "b2", &NeuralProbabilisticLanguageModel::b2, 
                  OptionBase::learntoption, 
                  "Bias of second hidden layer.\n");
    declareOption(ol, "wout", &NeuralProbabilisticLanguageModel::wout, 
                  OptionBase::learntoption, 
                  "Weights of output layer.\n");
    declareOption(ol, "bout", &NeuralProbabilisticLanguageModel::bout, 
                  OptionBase::learntoption, 
                  "Bias of output layer.\n");
    declareOption(ol, "direct_wout", 
                  &NeuralProbabilisticLanguageModel::direct_wout, 
                  OptionBase::learntoption, 
                  "Direct input to output weights.\n");
    declareOption(ol, "direct_bout", 
                  &NeuralProbabilisticLanguageModel::direct_bout, 
                  OptionBase::learntoption, 
                  "Direct input to output bias.\n");
    declareOption(ol, "wout_dist_rep", 
                  &NeuralProbabilisticLanguageModel::wout_dist_rep, 
                  OptionBase::learntoption, 
                  "Weights of output layer for distributed representation "
                  "predictor.\n");
    declareOption(ol, "bout_dist_rep", 
                  &NeuralProbabilisticLanguageModel::bout_dist_rep, 
                  OptionBase::learntoption, 
                  "Bias of output layer for distributed representation "
                  "predictor.\n");

    inherited::declareOptions(ol);

}
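
A minimal usage sketch in C++ (assumptions: the build options above can be set directly through the public fields listed earlier, a FeatureSet and a training VMat already exist under the hypothetical names my_feature_set and train_vmat, and training is driven through the inherited PLearner interface mentioned on this page; this is illustrative, not an example taken from PLearn):

    #include <NeuralProbabilisticLanguageModel.h>
    using namespace PLearn;

    // train_vmat: VMat whose input fields are the context words and whose targetsize() is 1
    // my_feature_set: PP<FeatureSet> built elsewhere (hypothetical)
    PP<NeuralProbabilisticLanguageModel> nplm = new NeuralProbabilisticLanguageModel();
    nplm->nhidden = 100;                   // one hidden layer of 100 units
    nplm->dist_rep_dim = 50;               // 50-dimensional distributed representations
    nplm->start_learning_rate = 0.01;
    nplm->output_transfer_func = "softmax";
    nplm->cost_funcs.resize(1);
    nplm->cost_funcs[0] = "NLL";           // the first cost is the one optimized
    nplm->feat_sets.resize(1);
    nplm->feat_sets[0] = my_feature_set;   // applied to input field i as feat_sets[i % feat_sets.length()]
    nplm->nstages = 5;                     // inherited PLearner option: number of training stages
    nplm->setTrainingSet(train_vmat);      // per the notes above, this ends up calling build() and forget()
    nplm->train();

    Vec input(nplm->inputsize()), output(nplm->outputsize());
    // ... fill input with an encoded word context ...
    nplm->computeOutput(input, output);    // output[0] holds the predicted target value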

Here is the call graph for this function:

static const PPath& PLearn::NeuralProbabilisticLanguageModel::declaringFile ( ) [inline, static]

Reimplemented from PLearn::PLearner.

Definition at line 311 of file NeuralProbabilisticLanguageModel.h.

NeuralProbabilisticLanguageModel * PLearn::NeuralProbabilisticLanguageModel::deepCopy ( CopiesMap &  copies) const [virtual]

Reimplemented from PLearn::PLearner.

Definition at line 60 of file NeuralProbabilisticLanguageModel.cc.

void PLearn::NeuralProbabilisticLanguageModel::fillWeights ( const Mat &  weights) [protected]

Fill a matrix of weights according to the 'initialization_method' specified.

Definition at line 1037 of file NeuralProbabilisticLanguageModel.cc.

References PLearn::TMat< T >::clear(), initialization_method, PLearn::TMat< T >::length(), rgen, and PLearn::sqrt().

Referenced by initializeParams().

                                                                     {
    if (initialization_method == "zero") {
        weights.clear();
        return;
    }
    real delta;
    int is = weights.length();
    if (initialization_method.find("linear") != string::npos)
        delta = 1.0 / real(is);
    else
        delta = 1.0 / sqrt(real(is));
    if (initialization_method.find("normal") != string::npos)
        rgen->fill_random_normal(weights, 0, delta);
    else
        rgen->fill_random_uniform(weights, -delta, delta);
}
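
Restated as a formula (with \Delta denoting the scale argument passed to the PRandom fill routines and n = weights.length(), i.e. the number of input rows):

    \Delta = 1/n  for the "*_linear" methods,   \Delta = 1/\sqrt{n}  for the "*_sqrt" methods;
    "normal_*" fills w_{ij} \sim \mathcal{N}(0, \Delta),   "uniform_*" fills w_{ij} \sim \mathcal{U}(-\Delta, \Delta),   "zero" clears the matrix.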

Here is the call graph for this function:

Here is the caller graph for this function:

void PLearn::NeuralProbabilisticLanguageModel::forget ( ) [virtual]

*** SUBCLASS WRITING: ***

(Re-)initializes the PLearner in its fresh state (that state may depend on the 'seed' option) and sets 'stage' back to 0 (this is the stage of a fresh learner!)

A typical forget() method should do the following:

  • initialize the learner's parameters, using this random generator
  • stage = 0;

This method is typically called by the build_() method, after it has finished setting up the parameters, and if it deemed useful to set or reset the learner in its fresh state. (remember build may be called after modifying options that do not necessarily require the learner to restart from a fresh state...) forget is also called by the setTrainingSet method, after calling build(), so it will generally be called TWICE during setTrainingSet!

Reimplemented from PLearn::PLearner.

Definition at line 1057 of file NeuralProbabilisticLanguageModel.cc.

References build(), PLearn::PLearner::stage, total_updates, and PLearn::PLearner::train_set.

{
    if (train_set) build();
    total_updates=0;
    stage = 0;
}

Here is the call graph for this function:

void PLearn::NeuralProbabilisticLanguageModel::fprop ( const Vec &  inputv,
Vec &  outputv,
const Vec &  targetv,
Vec &  costsv,
real  sampleweight = 1 
) const [protected]

Forward propagation in the network.

Definition at line 449 of file NeuralProbabilisticLanguageModel.cc.

References fpropCostsFromOutput(), and fpropOutput().

Referenced by train(), verify_gradient(), and verify_gradient_affine_transform().

{
    
    fpropOutput(inputv,outputv);
    fpropCostsFromOutput(inputv, outputv, targetv, costsv, sampleweight);

}

Here is the call graph for this function:

Here is the caller graph for this function:

void PLearn::NeuralProbabilisticLanguageModel::fpropBeforeOutputWeights ( const Vec &  inputv) const [protected]

Forward propagation until output weights are reached (called by fpropOutput(...) and importance_sampling_gradient_update(...)

Definition at line 499 of file NeuralProbabilisticLanguageModel.cc.

References add_affine_transform(), add_transfer_func(), b1, b2, bout_dist_rep, PLearn::TVec< T >::data(), dist_rep_dim, f, feat_input, feat_sets, feats, hidden2v, hiddenv, i, ifeats, PLearn::PLearner::inputsize_, j, last_layer, PLearn::TVec< T >::length(), n_feat_sets, nfeats, nhidden, nhidden2, ni, nj, nnet_input, offset, possible_targets_vary, PLearn::TVec< T >::resize(), row, PLearn::TVec< T >::size(), str, PLearn::TVec< T >::subVec(), target_values, target_values_reference_set, val_string_reference_set, w1, w2, and wout_dist_rep.

Referenced by fpropOutput(), and importance_sampling_gradient_update().

{
    // Get possible target values
    if(possible_targets_vary) 
    {
        row.subVec(0,inputsize_) << inputv;
        target_values_reference_set->getValues(row,inputsize_,target_values);
        outputv.resize(target_values.length());
    }

    // Get features
    ni = inputsize_;
    nfeats = 0;
    for(int i=0; i<ni; i++)
    {
        str = val_string_reference_set->getValString(i,inputv[i]);
        feat_sets[i%n_feat_sets]->getFeatures(str,feats[i]);
        nfeats += feats[i].length();
    }
    
    feat_input.resize(nfeats);
    offset = 0;
    id = 0;
    for(int i=0; i<ni; i++)
    {
        f = feats[i].data();
        nj = feats[i].length();
        for(int j=0; j<nj; j++)
            feat_input[id++] = offset + *f++;
        if(dist_rep_dim <= 0 || ((i+1) % n_feat_sets != 0))
            offset += feat_sets[i % n_feat_sets]->size();
        else
            offset = 0;
    }

    // Fprop up to output weights
    if(dist_rep_dim > 0) // x -> d(x)
    {        
        nfeats = 0;
        id = 0;
        for(int i=0; i<inputsize_;)
        {
            ifeats = 0;
            for(int j=0; j<n_feat_sets; j++,i++)
                ifeats += feats[i].length();
            
            add_affine_transform(feat_input.subVec(nfeats,ifeats),
                                 wout_dist_rep, bout_dist_rep,
                                 nnet_input.subVec(id*dist_rep_dim,dist_rep_dim),
                                      true, false);
            nfeats += ifeats;
            id++;
        }

        if(nhidden>0) // d(x) -> h1(d(x))
        {
            add_affine_transform(nnet_input,w1,b1,hiddenv,false,false);
            add_transfer_func(hiddenv);

            if(nhidden2>0) // h1(d(x)) -> h2(h1(d(x)))
            {
                add_affine_transform(hiddenv,w2,b2,hidden2v,false,false);
                add_transfer_func(hidden2v);
                last_layer = hidden2v;
            }
            else
                last_layer = hiddenv;
        }
        else
            last_layer = nnet_input;

    }
    else
    {        
        if(nhidden>0) // x -> h1(x)
        {
            add_affine_transform(feat_input,w1,b1,hiddenv,true,false);
            // Transfer function
            add_transfer_func(hiddenv);

            if(nhidden2>0) // h1(x) -> h2(h1(x))
            {
                add_affine_transform(hiddenv,w2,b2,hidden2v,true,false);
                add_transfer_func(hidden2v);
                last_layer = hidden2v;
            }
            else
                last_layer = hiddenv;
        }
        else
            last_layer = feat_input;
    }
}
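The index bookkeeping above is easier to follow on a small standalone example. The sketch below (plain C++ with hypothetical sizes, using std::vector instead of PLearn's Vec) reproduces the offset logic: with a distributed representation (dist_rep_dim > 0), the offset restarts after every group of n_feat_sets input positions, so every token indexes the same total_feats_per_token rows of wout_dist_rep; without it, offsets keep accumulating over the whole input.

#include <cstdio>
#include <vector>

int main()
{
    // Hypothetical configuration: 3 context tokens, 2 feature sets per
    // token, of sizes 5 and 3, with a distributed representation enabled.
    const int  inputsize        = 6;   // 3 tokens * 2 feature sets
    const int  n_feat_sets      = 2;
    const int  feat_set_size[2] = {5, 3};
    const bool use_dist_rep     = true;

    // One (arbitrary) active feature per position, local to its feature set.
    const int local_feat[6] = {4, 1, 0, 2, 3, 0};

    std::vector<int> feat_input;
    int offset = 0;
    for (int i = 0; i < inputsize; ++i) {
        feat_input.push_back(offset + local_feat[i]);
        if (!use_dist_rep || ((i + 1) % n_feat_sets != 0))
            offset += feat_set_size[i % n_feat_sets];
        else
            offset = 0;                // restart for the next token
    }
    for (int idx : feat_input)
        std::printf("%d ", idx);       // prints: 4 6 0 7 3 5
    std::printf("\n");
    return 0;
}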

Here is the call graph for this function:

Here is the caller graph for this function:

void PLearn::NeuralProbabilisticLanguageModel::fpropCostsFromOutput ( const Vec inputv,
const Vec outputv,
const Vec targetv,
Vec costsv,
real  sampleweight = 1 
) const [protected]

Forward propagation to compute the costs from the output.

Definition at line 594 of file NeuralProbabilisticLanguageModel.cc.

References classification_loss(), cost_funcs, PLearn::TVec< T >::find(), nll(), PLERROR, possible_targets_vary, reind_target, PLearn::TVec< T >::size(), and target_values.

Referenced by fprop().

{
    //Compute cost

    if(possible_targets_vary)
    {
        reind_target = target_values.find(targetv[0]);
        if(reind_target<0)
            PLERROR("In NeuralProbabilisticLanguageModel::fprop(): target %d is not in possible targets", targetv[0]);
    }
    else
        reind_target = (int)targetv[0];

    // Build cost function

    int ncosts = cost_funcs.size();
    for(int k=0; k<ncosts; k++)
    {
        if(cost_funcs[k]=="NLL") 
        {
            costsv[k] = sampleweight*nll(outputv,reind_target);
        }
        else if(cost_funcs[k]=="class_error")
            costsv[k] = sampleweight*classification_loss(outputv, reind_target);
        else 
            PLERROR("In NeuralProbabilisticLanguageModel::fprop(): "
                    "unknown cost_func option: %s",cost_funcs[k].c_str());        
    }
}
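For reference, with $o$ the output vector produced by fpropOutput, $t$ the (re-indexed) target and $s$ the sample weight, the two supported costs above amount to

$$\mathrm{NLL}(o,t) = -s\,\log o_t, \qquad \mathrm{class\_error}(o,t) = s\,\mathbf{1}\!\left[\operatorname*{arg\,max}_k o_k \neq t\right],$$

assuming classification_loss is the usual 0-1 loss on the argmax of the output.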

Here is the call graph for this function:

Here is the caller graph for this function:

void PLearn::NeuralProbabilisticLanguageModel::fpropOutput ( const Vec inputv,
Vec outputv 
) const [protected]

Forward propagation to compute the output.

Definition at line 463 of file NeuralProbabilisticLanguageModel.cc.

References add_affine_transform(), add_transfer_func(), bout, direct_bout, direct_in_to_out, direct_wout, dist_rep_dim, feat_input, fpropBeforeOutputWeights(), last_layer, nhidden, nhidden2, nnet_input, output_transfer_func, PLERROR, possible_targets_vary, target_values, and wout.

Referenced by computeOutput(), and fprop().

{
    // Forward propagation up to the output weights; sets last_layer
    fpropBeforeOutputWeights(inputv);
    
    if(dist_rep_dim > 0) // x -> d(x)
    {        
        // d(x),h1(d(x)),h2(h1(d(x))) -> o(x)

        add_affine_transform(last_layer,wout,bout,outputv,false,
                             possible_targets_vary,target_values);            
        if(direct_in_to_out && nhidden>0)
            add_affine_transform(nnet_input,direct_wout,direct_bout,
                                 outputv,false,possible_targets_vary,
                                 target_values);
    }
    else
    {
        // x, h1(x),h2(h1(x)) -> o(x)
        add_affine_transform(last_layer,wout,bout,outputv,nhidden<=0,
                             possible_targets_vary,target_values);            
        if(direct_in_to_out && nhidden>0)
            add_affine_transform(feat_input,direct_wout,direct_bout,
                                 outputv,true,possible_targets_vary,
                                 target_values);
    }
                               
    if (nhidden2>0 && nhidden<=0)
        PLERROR("NeuralProbabilisticLanguageModel::fprop(): "
                "can't have nhidden2 (=%d) > 0 while nhidden=0",nhidden2);
    
    if(output_transfer_func!="" && output_transfer_func!="none")
       add_transfer_func(outputv, output_transfer_func);
}
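Putting fpropBeforeOutputWeights and the output layer together, the forward pass computed here is, in the usual configuration (distributed representation, two hidden layers, softmax output transfer function):

$$o(x) = \mathrm{softmax}\!\Bigl(W_{out}^{\top}\, h_2\bigl(h_1(d(x))\bigr) + b_{out} + W_{direct}^{\top}\, d(x)\Bigr),$$

where $d(x)$ is the concatenation of the learned dist_rep_dim-dimensional token representations, $h_1$ and $h_2$ are the affine-plus-transfer-function hidden layers, and the direct term is present only when direct_in_to_out is set.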

Here is the call graph for this function:

Here is the caller graph for this function:

void PLearn::NeuralProbabilisticLanguageModel::getNegativeEnergyValues ( Vec  samples,
Vec  neg_energies 
) [protected]

Gives scalar negative energy values for some samples (words).

Assumes fpropBeforeOutputWeights has been called previously.

Definition at line 2114 of file NeuralProbabilisticLanguageModel.cc.

References add_affine_transform(), bout, direct_bout, direct_in_to_out, direct_wout, dist_rep_dim, feat_input, last_layer, nhidden, nnet_input, and wout.

Referenced by importance_sampling_gradient_update().

{
    if(dist_rep_dim > 0) // x -> d(x)
    {        
        // d(x),h1(d(x)),h2(h1(d(x))) -> o(x)

        add_affine_transform(last_layer,wout,bout,neg_energies,false,
                             true,samples);            
        if(direct_in_to_out && nhidden>0)
            add_affine_transform(nnet_input,direct_wout,direct_bout,
                                 neg_energies,false,true,
                                 samples);
    }
    else
    {
        // x, h1(x),h2(h1(x)) -> o(x)
        add_affine_transform(last_layer,wout,bout,neg_energies,nhidden<=0,
                             true,samples);
        if(direct_in_to_out && nhidden>0)
            add_affine_transform(feat_input,direct_wout,direct_bout,
                                 neg_energies,true,true,
                                 samples);
    }
}
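In energy-model terms, the value accumulated for a sampled word $w$ is its pre-softmax activation (its "negative energy")

$$-E(w,x) = \mathbf{w}^{out}_{w} \cdot h(x) + b^{out}_{w} + \mathbf{w}^{direct}_{w} \cdot d(x) \quad (\text{direct term only if direct\_in\_to\_out}),$$

where $h(x)$ is last_layer as set by fpropBeforeOutputWeights; the model's conditional distribution is then $P(w \mid x) \propto \exp(-E(w,x))$, which is how these values are used in importance_sampling_gradient_update.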

Here is the call graph for this function:

Here is the caller graph for this function:

OptionList & PLearn::NeuralProbabilisticLanguageModel::getOptionList ( ) const [virtual]

Reimplemented from PLearn::Object.

Definition at line 60 of file NeuralProbabilisticLanguageModel.cc.

OptionMap & PLearn::NeuralProbabilisticLanguageModel::getOptionMap ( ) const [virtual]

Reimplemented from PLearn::Object.

Definition at line 60 of file NeuralProbabilisticLanguageModel.cc.

RemoteMethodMap & PLearn::NeuralProbabilisticLanguageModel::getRemoteMethodMap ( ) const [virtual]

Reimplemented from PLearn::Object.

Definition at line 60 of file NeuralProbabilisticLanguageModel.cc.

TVec< string > PLearn::NeuralProbabilisticLanguageModel::getTestCostNames ( ) const [virtual]

*** SUBCLASS WRITING: ***

This should return the names of the costs computed by computeCostsFromOutputs.

Implements PLearn::PLearner.

Definition at line 1075 of file NeuralProbabilisticLanguageModel.cc.

References cost_funcs.

{ 
    return cost_funcs;
}
TVec< string > PLearn::NeuralProbabilisticLanguageModel::getTrainCostNames ( ) const [virtual]

*** SUBCLASS WRITING: ***

This should return the names of the objective costs that the train method computes and for which it updates the VecStatsCollector train_stats.

Implements PLearn::PLearner.

Definition at line 1067 of file NeuralProbabilisticLanguageModel.cc.

References cost_funcs.

Referenced by train(), and verify_gradient().

{
    return cost_funcs;
}

Here is the caller graph for this function:

void PLearn::NeuralProbabilisticLanguageModel::gradient_affine_transform ( Vec  input,
Mat  weights,
Vec  bias,
Vec  ginput,
Mat  gweights,
Vec  gbias,
Vec  goutput,
bool  input_is_sparse,
bool  output_is_sparse,
real  learning_rate,
real  weight_decay,
real  bias_decay,
Vec  output_indices = Vec(0) 
) [protected]

Propagate gradient through affine transform on input using provided weights and bias.

Information about the nature of the input and output (sparse or dense) needs to be provided. If bias.length() == 0, no backpropagation is done to the bias.

Definition at line 1287 of file NeuralProbabilisticLanguageModel.cc.

References bias_decay, PLearn::TVec< T >::data(), PLearn::fast_exact_is_equal(), i, j, PLearn::TVec< T >::length(), ni, nj, penalty_type, pval1, pval2, pval3, pval4, pval5, PLearn::two(), val, val2, and weight_decay.

Referenced by bprop(), and importance_sampling_gradient_update().

{
    // Bias
    if(bias.length() != 0)
    {
        if(output_is_sparse)
        {
            pval1 = gbias.data();
            pval2 = goutput.data();
            pval3 = output_indices.data();
            ni = goutput.length();
            
            if(fast_exact_is_equal(bias_decay, 0))
            {
                // Without bias decay
                for(int i=0; i<ni; i++)
                    pval1[(int)*pval3++] += *pval2++;
            }
            else
            {
                // With bias decay
                if(penalty_type == "L2_square")
                {
                    pval4 = bias.data();
                    val = -two(learning_rate)*bias_decay;
                    for(int i=0; i<ni; i++)
                    {
                        pval1[(int)*pval3] += *pval2++ + val*(pval4[(int)*pval3]);
                        pval3++;
                    }
                }
                else if(penalty_type == "L1")
                {
                    pval4 = bias.data();
                    val = -learning_rate*bias_decay;
                    for(int i=0; i<ni; i++)
                    {
                        val2 = pval4[(int)*pval3];
                        if(val2 > 0 )
                            pval1[(int)*pval3] += *pval2 + val;
                        else if(val2 < 0)
                            pval1[(int)*pval3] += *pval2 - val;
                        pval2++;
                        pval3++;
                    }
                }
            }
        }
        else
        {
            pval1 = gbias.data();
            pval2 = goutput.data();
            ni = goutput.length();
            if(fast_exact_is_equal(bias_decay, 0))
            {
                // Without bias decay
                for(int i=0; i<ni; i++)
                    *pval1++ += *pval2++;
            }
            else
            {
                // With bias decay
                if(penalty_type == "L2_square")
                {
                    pval3 = bias.data();
                    val = -two(learning_rate)*bias_decay;
                    for(int i=0; i<ni; i++)
                    {
                        *pval1++ += *pval2++ + val * (*pval3++);
                    }
                }
                else if(penalty_type == "L1")
                {
                    pval3 = bias.data();
                    val = -learning_rate*bias_decay;
                    for(int i=0; i<ni; i++)
                    {
                        if(*pval3 > 0)
                            *pval1 += *pval2 + val;
                        else if(*pval3 < 0)
                            *pval1 += *pval2 - val;
                        pval1++;
                        pval2++;
                        pval3++;
                    }
                }
            }
        }
    }

    // Weights and input (when appropriate)
    if(!input_is_sparse && !output_is_sparse)
    {        
        // Input
        //productAcc(ginput, weights, goutput);
        // Weights
        //externalProductAcc(gweights, input, goutput);

        // Faster code to do this, which limits the accesses
        // to memory

        ni = input.length();
        nj = goutput.length();
        pval3 = ginput.data();
        pval5 = input.data();
        
        if(fast_exact_is_equal(weight_decay, 0))
        {
            // Without weight decay
            for(int i=0; i<ni; i++) {
                
                pval1 = goutput.data();
                pval2 = weights[i];
                pval4 = gweights[i];
                for(int j=0; j<nj; j++) {
                    *pval3 += *pval2 * (*pval1);
                    *pval4 += *pval5 * (*pval1);
                    pval1++;
                    pval2++;
                    pval4++;
                }
                pval3++;
                pval5++;
            }   
        }
        else
        {
            //With weight decay            
            if(penalty_type == "L2_square")
            {
                val = -two(learning_rate)*weight_decay;
                for(int i=0; i<ni; i++) {   
                    pval1 = goutput.data();
                    pval2 = weights[i];
                    pval4 = gweights[i];
                    for(int j=0; j<nj; j++) {
                        *pval3 += *pval2 * (*pval1);
                        *pval4 += *pval5 * (*pval1) + val * (*pval2);
                        pval1++;
                        pval2++;
                        pval4++;
                    }
                    pval3++;
                    pval5++;
                }
            }
            else if(penalty_type == "L1")
            {
                val = -learning_rate*weight_decay;
                for(int i=0; i<ni; i++) {
                    
                    pval1 = goutput.data();
                    pval2 = weights[i];
                    pval4 = gweights[i];
                    for(int j=0; j<nj; j++) {
                        *pval3 += *pval2 * (*pval1);
                        if(*pval2 > 0)
                            *pval4 += *pval5 * (*pval1) + val;
                        else if(*pval2 < 0)
                            *pval4 += *pval5 * (*pval1) - val;
                        pval1++;
                        pval2++;
                        pval4++;
                    }
                    pval3++;
                    pval5++;
                }
            }
        }
    }
    else if(!input_is_sparse && output_is_sparse)
    {
        ni = goutput.length();
        nj = input.length();
        pval1 = goutput.data();
        pval3 = output_indices.data();
        
        if(fast_exact_is_equal(weight_decay, 0))
        {
            // Without weight decay
            for(int i=0; i<ni; i++)
            {
                pval2 = input.data();
                pval4 = ginput.data();
                for(int j=0; j<nj; j++)
                {
                    // Input
                    *pval4++ += weights(j,(int)(*pval3))*(*pval1);
                    // Weights
                    gweights(j,(int)(*pval3)) += (*pval2++)*(*pval1);
                }
                pval1++;
                pval3++;
            }
        }
        else
        {
            // With weight decay
            if(penalty_type == "L2_square")
            {
                val = -two(learning_rate)*weight_decay;
                for(int i=0; i<ni; i++)
                {
                    pval2 = input.data();
                    pval4 = ginput.data();
                    for(int j=0; j<nj; j++)
                    {
                        val2 = weights(j,(int)(*pval3));
                        // Input
                        *pval4++ += val2*(*pval1);
                        // Weights
                        gweights(j,(int)(*pval3)) += (*pval2++)*(*pval1) 
                            + val*val2;
                    }
                    pval1++;
                    pval3++;
                }
            }
            else if(penalty_type == "L1")
            {
                val = -learning_rate*weight_decay;
                for(int i=0; i<ni; i++)
                {
                    pval2 = input.data();
                    pval4 = ginput.data();
                    for(int j=0; j<nj; j++)
                    {
                        val2 = weights(j,(int)(*pval3));
                        // Input
                        *pval4++ += val2*(*pval1);
                        // Weights
                        if(val2 > 0)
                            gweights(j,(int)(*pval3)) += (*pval2)*(*pval1) + val;
                        else if(val2 < 0)
                            gweights(j,(int)(*pval3)) += (*pval2)*(*pval1) - val;
                        pval2++;
                    }
                    pval1++;
                    pval3++;
                }
            }
        }
    }
    else if(input_is_sparse && !output_is_sparse)
    {
        ni = input.length();
        nj = goutput.length();

        if(fast_exact_is_equal(weight_decay, 0))
        {
            // Without weight decay
            if(ni != 0)
            {
                pval3 = input.data();
                for(int i=0; i<ni; i++)
                {
                    pval1 = goutput.data();
                    pval2 = gweights[(int)(*pval3++)];
                    for(int j=0; j<nj;j++)
                        *pval2++ += *pval1++;
                }
            }
        }
        else
        {
            // With weight decay
            if(penalty_type == "L2_square")
            {
                if(ni != 0)
                {
                    pval3 = input.data();                    
                    val = -two(learning_rate)*weight_decay;
                    for(int i=0; i<ni; i++)
                    {
                        pval1 = goutput.data();
                        pval2 = gweights[(int)(*pval3)];
                        pval4 = weights[(int)(*pval3++)];
                        for(int j=0; j<nj;j++)
                        {
                            *pval2++ += *pval1++ + val * (*pval4++);
                        }
                    }
                }
            }
            else if(penalty_type == "L1")
            {
                if(ni != 0)
                {
                    pval3 = input.data();
                    val = learning_rate*weight_decay;
                    for(int i=0; i<ni; i++)
                    {
                        pval1 = goutput.data();
                        pval2 = gweights[(int)(*pval3)];
                        pval4 = weights[(int)(*pval3++)];
                        for(int j=0; j<nj;j++)
                        {
                            if(*pval4 > 0)
                                *pval2 += *pval1 + val;
                            else if(*pval4 < 0)
                                *pval2 += *pval1 - val;
                            pval1++;
                            pval2++;
                            pval4++;
                        }
                    }
                }
            }
        }
    }
    else if(input_is_sparse && output_is_sparse)
    {
        ni = input.length();
        nj = goutput.length();

        if(fast_exact_is_equal(weight_decay, 0))
        {
            // Without weight decay
            if(ni != 0)
            {
                pval2 = input.data();
                for(int i=0; i<ni; i++)
                {
                    pval1 = goutput.data();
                    pval3 = output_indices.data();
                    for(int j=0; j<nj; j++)
                        gweights((int)(*pval2),(int)*pval3++) += *pval1++;
                    pval2++;
                }
            }
        }
        else
        {
            // With weight decay
            if(penalty_type == "L2_square")
            {
                if(ni != 0)
                {
                    pval2 = input.data();
                    val = -two(learning_rate)*weight_decay;                    
                    for(int i=0; i<ni; i++)
                    {
                        pval1 = goutput.data();
                        pval3 = output_indices.data();
                        for(int j=0; j<nj; j++)
                        {
                            gweights((int)(*pval2),(int)*pval3) 
                                += *pval1++ 
                                + val * weights((int)(*pval2),(int)*pval3);
                            pval3++;
                        }
                        pval2++;
                    }
                }
            }
            else if(penalty_type == "L1")
            {
                if(ni != 0)
                {
                    pval2 = input.data();
                    val = -learning_rate*weight_decay;                    
                    for(int i=0; i<ni; i++)
                    {
                        pval1 = goutput.data();
                        pval3 = output_indices.data();
                        for(int j=0; j<nj; j++)
                        {
                            val2 = weights((int)(*pval2),(int)*pval3);
                            if(val2 > 0)
                                gweights((int)(*pval2),(int)*pval3) 
                                    += *pval1 + val;
                            else if(val2 < 0)
                                gweights((int)(*pval2),(int)*pval3) 
                                    += *pval1 - val;
                            pval1++;
                            pval3++;
                        }
                        pval2++;
                    }
                }
            }
        }
    }

//    gradient_penalty(input,weights,bias,gweights,gbias,input_is_sparse,output_is_sparse,
//                     learning_rate,weight_decay,bias_decay,output_indices);
}
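In the dense-input/dense-output case, the loops above accumulate, for the affine map $o = W^{\top}x + b$ (weights stored with one row per input unit),

$$\nabla x \mathrel{+}= W\,\nabla o, \qquad \nabla W \mathrel{+}= x\,(\nabla o)^{\top} + p(W), \qquad \nabla b \mathrel{+}= \nabla o + p(b),$$

where the decay term is $p(\theta) = -2\eta\lambda\,\theta$ for penalty_type == "L2_square" and $p(\theta) = -\eta\lambda\,\operatorname{sign}(\theta)$ for "L1" ($\eta$ the learning rate, $\lambda$ the weight or bias decay). The sparse cases apply the same updates restricted to the active input features and/or the candidate output indices, which is what avoids touching the full vocabulary-sized weight matrices.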

Here is the call graph for this function:

Here is the caller graph for this function:

void PLearn::NeuralProbabilisticLanguageModel::gradient_penalty ( Vec  input,
Mat  weights,
Vec  bias,
Mat  gweights,
Vec  gbias,
bool  input_is_sparse,
bool  output_is_sparse,
real  learning_rate,
real  weight_decay,
real  bias_decay,
Vec  output_indices = Vec(0) 
) [protected]

Propagate the penalty (weight and bias decay) gradient through the weights and bias, scaled by the negative learning rate.

Definition at line 1682 of file NeuralProbabilisticLanguageModel.cc.

References PLearn::TMat< T >::begin(), bias_decay, PLearn::TMat< T >::compact_begin(), PLearn::TMat< T >::compact_end(), PLearn::TVec< T >::data(), PLearn::TMat< T >::end(), PLearn::fast_exact_is_equal(), i, PLearn::TMat< T >::isCompact(), j, PLearn::TVec< T >::length(), PLearn::multiplyAcc(), ni, nj, penalty_type, pval1, pval2, pval3, PLearn::two(), val, val2, and weight_decay.

{
    // Bias
    if(!fast_exact_is_equal(bias_decay, 0) && !fast_exact_is_equal(bias.length(),
                                                                   0) )
    {
        if(output_is_sparse)
        {
            pval1 = gbias.data();
            pval2 = bias.data();
            pval3 = output_indices.data();
            ni = output_indices.length();            
            if(penalty_type == "L2_square")
            {
                val = -two(learning_rate)*bias_decay;
                for(int i=0; i<ni; i++)
                {
                    pval1[(int)*pval3] += val*(pval2[(int)*pval3]);
                    pval3++;
                }
            }
            else if(penalty_type == "L1")
            {
                val = -learning_rate*bias_decay;
                for(int i=0; i<ni; i++)
                {
                    val2 = pval2[(int)*pval3];
                    if(val2 > 0 )
                        pval1[(int)*pval3++] += val;
                    else if(val2 < 0)
                        pval1[(int)*pval3++] -= val;
                }
            }
        }
        else
        {
            pval1 = gbias.data();
            pval2 = bias.data();
            ni = bias.length();            
            if(penalty_type == "L2_square")
            {
                val = -two(learning_rate)*bias_decay;
                for(int i=0; i<ni; i++)
                    *pval1++ += val*(*pval2++);
            }
            else if(penalty_type == "L1")
            {
                val = -learning_rate*bias_decay;
                for(int i=0; i<ni; i++)
                {
                    if(*pval2 > 0)
                        *pval1 += val;
                    else if(*pval2 < 0)
                        *pval1 -= val;
                    pval1++;
                    pval2++;
                }
            }
        }
    }

    // Weights
    if(!fast_exact_is_equal(weight_decay, 0))
    {
        if(!input_is_sparse && !output_is_sparse)
        {      
            if(penalty_type == "L2_square")
            {
                multiplyAcc(gweights, weights,-two(learning_rate)*weight_decay);
            }
            else if(penalty_type == "L1")
            {
                val = -learning_rate*weight_decay;
                if(gweights.isCompact() && weights.isCompact())
                {
                    Mat::compact_iterator itm = gweights.compact_begin();
                    Mat::compact_iterator itmend = gweights.compact_end();
                    Mat::compact_iterator itx = weights.compact_begin();
                    for(; itm!=itmend; ++itm, ++itx)
                    {
                        if(*itx > 0)
                            *itm += val;
                        else if(*itx < 0)
                            *itm -= val;
                    }
                }
                else // use non-compact iterators
                {
                    Mat::iterator itm = gweights.begin();
                    Mat::iterator itmend = gweights.end();
                    Mat::iterator itx = weights.begin();
                    for(; itm!=itmend; ++itm, ++itx)
                    {
                        if(*itx > 0)
                            *itm += val;
                        else if(*itx < 0)
                            *itm -= val;
                    }
                }
            }
        }
        else if(!input_is_sparse && output_is_sparse)
        {
            ni = output_indices.length();
            nj = input.length();
            pval1 = output_indices.data();

            if(penalty_type == "L2_square")
            {
                val = -two(learning_rate)*weight_decay;
                for(int i=0; i<ni; i++)
                {
                    for(int j=0; j<nj; j++)
                    {
                        gweights(j,(int)(*pval1)) += val * 
                            weights(j,(int)(*pval1));
                    }
                    pval1++;
                }
            }
            else if(penalty_type == "L1")
            {
                val = -learning_rate*weight_decay;
                for(int i=0; i<ni; i++)
                {
                    for(int j=0; j<nj; j++)
                    {
                        val2 = weights(j,(int)(*pval1));
                        if(val2 > 0)
                            gweights(j,(int)(*pval1)) +=  val;
                        else if(val2 < 0)
                            gweights(j,(int)(*pval1)) -=  val;
                    }
                    pval1++;
                }
            }
        }
        else if(input_is_sparse && !output_is_sparse)
        {
            ni = input.length();
            nj = output_indices.length();
            if(ni != 0)
            {
                pval3 = input.data();
                if(penalty_type == "L2_square")
                {
                    val = -two(learning_rate)*weight_decay;
                    for(int i=0; i<ni; i++)
                    {
                        pval1 = weights[(int)(*pval3)];
                        pval2 = gweights[(int)(*pval3++)];
                        for(int j=0; j<nj;j++)
                            *pval2++ += val * *pval1++;
                    }
                }
                else if(penalty_type == "L1")
                {
                    val = -learning_rate*weight_decay;
                    for(int i=0; i<ni; i++)
                    {
                        pval1 = weights[(int)(*pval3)];
                        pval2 = gweights[(int)(*pval3++)];
                        for(int j=0; j<nj;j++)
                        {
                            if(*pval1 > 0)
                                *pval2 += val;
                            else if(*pval1 < 0)
                                *pval2 -= val;
                            pval2++;
                            pval1++;
                        }
                    }                
                }
            }
        }
        else if(input_is_sparse && output_is_sparse)
        {
            ni = input.length();
            nj = output_indices.length();
            if(ni != 0)
            {
                pval1 = input.data();
                if(penalty_type == "L2_square")
                {
                    val = -two(learning_rate)*weight_decay;
                    for(int i=0; i<ni; i++)
                    {
                        pval2 = output_indices.data();
                        for(int j=0; j<nj; j++)
                        {
                            gweights((int)(*pval1),(int)*pval2) += val*
                                weights((int)(*pval1),(int)*pval2);
                        pval2++;
                        }
                        pval1++;
                    }
                }
                else if(penalty_type == "L1")
                {
                    val = -learning_rate*weight_decay;
                    for(int i=0; i<ni; i++)
                    {
                        pval2 = output_indices.data();
                        for(int j=0; j<nj; j++)
                        {
                            val2 = weights((int)(*pval1),(int)*pval2);
                            if(val2 > 0)
                                gweights((int)(*pval1),(int)*pval2) += val;
                            else if(val2 < 0)
                                gweights((int)(*pval1),(int)*pval2) -= val;
                            pval2++;
                        }
                        pval1++;
                    }
                    
                }
            }
        }
    }
}

Here is the call graph for this function:

void PLearn::NeuralProbabilisticLanguageModel::gradient_transfer_func ( Vec output,
Vec gradient_input,
Vec gradient_output,
string  transfer_func = "default",
int  nll_softmax_speed_up_target = -1 
) [protected]

Computes the gradient through the given activation function, given the output value and the initial gradient on that output (i.e. the gradient before it has been backpropagated through the activation function).

After calling this function, gradient_input holds the gradient with respect to the pre-activation values. nll_softmax_speed_up_target can be set to speed up the gradient computation for the output layer when the softmax transfer function is used together with the NLL cost function.

Definition at line 1113 of file NeuralProbabilisticLanguageModel.cc.

References PLearn::TVec< T >::data(), grad, hidden_transfer_func, i, PLearn::TVec< T >::length(), ni, nk, PLERROR, pval1, pval2, pval3, PLearn::square(), and val.

Referenced by bprop(), and importance_sampling_gradient_update().

{
    if (transfer_func == "default")        
        transfer_func = hidden_transfer_func;
    if(transfer_func=="linear")
    {
        pval1 = gradient_output.data();
        pval2 = gradient_input.data();
        ni = output.length();
        for(int i=0; i<ni; i++)
            *pval2++ += *pval1++;
        return;
    }
    else if(transfer_func=="tanh")
    {
        pval1 = gradient_output.data();
        pval2 = output.data();
        pval3 = gradient_input.data();
        ni = output.length();
        for(int i=0; i<ni; i++)
            *pval3++ += (*pval1++)*(1.0-square(*pval2++));
        return;
    }        
    else if(transfer_func=="sigmoid")
    {
        pval1 = gradient_output.data();
        pval2 = output.data();
        pval3 = gradient_input.data();
        ni = output.length();
        for(int i=0; i<ni; i++)
        {
            *pval3++ += (*pval1++)*(*pval2)*(1.0-*pval2);
            pval2++;
        }   
        return;
    }
    else if(transfer_func=="softmax")
    {
        if(nll_softmax_speed_up_target<0)
        {            
            pval3 = gradient_input.data();
            ni = nk = output.length();
            for(int i=0; i<ni; i++)
            {
                val = output[i];
                pval1 = gradient_output.data();
                pval2 = output.data();
                for(int k=0; k<nk; k++)
                    if(k!=i)
                        *pval3 -= *pval1++ * val * (*pval2++);
                    else
                    {
                        *pval3 += *pval1++ * val * (1.0-val);
                        pval2++;
                    }
                pval3++;                
            }   
        }
        else // Permits speedup and avoids numerical precision errors
        {
            pval2 = output.data();
            pval3 = gradient_input.data();
            ni = output.length();
            grad = gradient_output[nll_softmax_speed_up_target];
            val = output[nll_softmax_speed_up_target];
            for(int i=0; i<ni; i++)
            {
                if(nll_softmax_speed_up_target!=i)
                    //*pval3++ -= grad * val * (*pval2++);
                    *pval3++ -= grad * (*pval2++);
                else
                {
                    //*pval3++ += grad * val * (1.0-val);
                    *pval3++ += grad * (1.0-val);
                    pval2++;
                }
            }   
        }
        return;
    }
    else PLERROR("In NeuralProbabilisticLanguageModel::gradient_transfer_func():"
                 "Unknown value for transfer_func: %s",transfer_func.c_str());
}
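The accumulated quantities correspond to the usual transfer-function derivatives: with $y$ the output of the transfer function and $\nabla y$ the incoming gradient,

$$\nabla a_i \mathrel{+}= \begin{cases} \nabla y_i & \text{linear} \\ \nabla y_i\,(1 - y_i^2) & \text{tanh} \\ \nabla y_i\, y_i (1 - y_i) & \text{sigmoid} \\ \sum_k \nabla y_k\, y_k\,(\mathbf{1}[k=i] - y_i) & \text{softmax.} \end{cases}$$

In the speed-up branch, the softmax Jacobian is combined with the NLL cost, for which $\partial(-\log y_t)/\partial a_i \propto y_i - \mathbf{1}[i=t]$; this avoids the full double loop and the numerically delicate division by $y_t$.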

Here is the call graph for this function:

Here is the caller graph for this function:

void PLearn::NeuralProbabilisticLanguageModel::importance_sampling_gradient_update ( Vec inputv,
Vec targetv,
real  learning_rate,
int  n_samples,
real  train_sample_weight = 1 
) [protected]

Update the neural network parameters using the importance-sampling estimate of the gradient, based on n_samples samples drawn from the proposal distribution.

Definition at line 1909 of file NeuralProbabilisticLanguageModel.cc.

References b1, b2, bias_decay, bout, bout_dist_rep, clearProppathGradient(), PLearn::TVec< T >::data(), densities, direct_bout, direct_in_to_out, direct_in_to_out_weight_decay, direct_wout, dist_rep_dim, PLearn::exp(), feat_input, feats, fpropBeforeOutputWeights(), generated_samples, getNegativeEnergyValues(), gradient_act_hidden2v, gradient_act_hiddenv, gradient_affine_transform(), gradient_b1, gradient_b2, gradient_bout, gradient_bout_dist_rep, gradient_direct_bout, gradient_direct_wout, gradient_feat_input, gradient_hidden2v, gradient_hiddenv, gradient_last_layer, gradient_nnet_input, gradient_transfer_func(), gradient_w1, gradient_w2, gradient_wout, gradient_wout_dist_rep, hidden2v, hiddenv, i, ifeats, importance_sampling_ratios, PLearn::PLearner::inputsize_, j, layer1_bias_decay, layer1_weight_decay, layer2_bias_decay, layer2_weight_decay, PLearn::TVec< T >::length(), n_feat_sets, neg_energies, nfeats, nhidden, nhidden2, nnet_input, output_layer_bias_decay, output_layer_dist_rep_bias_decay, output_layer_dist_rep_weight_decay, output_layer_weight_decay, proposal_distribution, pval1, pval2, pval3, PLearn::TVec< T >::resize(), sample, stochastic_gradient_descent_speedup, PLearn::TVec< T >::subVec(), PLearn::sum(), update(), w1, w2, weight_decay, wout, and wout_dist_rep.

Referenced by train().

{
    // TODO: implement NGramDistribution::generate()
    //       adjust deepcopy(...)

    // Do forward propagation that is common to all computations
    fpropBeforeOutputWeights(inputv);

    // Generate the n_samples samples from proposal_distribution
    generated_samples.resize(n_samples+1);
    densities.resize(n_samples);
    
    proposal_distribution->setPredictor(inputv);
    pval1 = generated_samples.data();
    pval2 = sample.data();
    pval3 = densities.data();
    for(int i=0; i<n_samples; i++)
    {
        proposal_distribution->generate(sample);        
        *pval1++ = *pval2;
        *pval3++ = proposal_distribution->density(sample);        
    }

    real sum = 0;
    generated_samples[n_samples] = targetv[0];
    neg_energies.resize(n_samples+1);
    getNegativeEnergyValues(generated_samples, neg_energies);
    
    importance_sampling_ratios.resize(
        importance_sampling_ratios.length() + n_samples);
    pval1 = importance_sampling_ratios.subVec(
        importance_sampling_ratios.length() - n_samples, n_samples).data();
    pval2 = neg_energies.data();
    pval3 = densities.data();
    for(int i=0; i<n_samples; i++)
    {
        *pval1 = exp(*pval2++) / (*pval3++);
        sum += *pval1++;
    }

    // Compute importance sampling estimate of the gradient

    // Training sample contribution...
    gradient_last_layer.resize(1);
    gradient_last_layer[0] = learning_rate*train_sample_weight;

    if(nhidden2 > 0) {
        gradient_affine_transform(hidden2v, wout, bout, gradient_hidden2v, 
                                  gradient_wout, gradient_bout, 
                                  gradient_last_layer,
                                  false, true, learning_rate*train_sample_weight, 
                                  weight_decay+output_layer_weight_decay,
                                  bias_decay+output_layer_bias_decay,
                                  generated_samples.subVec(n_samples,1));
    }
    else if(nhidden > 0) 
    {
        gradient_affine_transform(hiddenv, wout, bout, gradient_hiddenv,
                                  gradient_wout, gradient_bout, 
                                  gradient_last_layer,
                                  false, true, learning_rate*train_sample_weight, 
                                  weight_decay+output_layer_weight_decay,
                                  bias_decay+output_layer_bias_decay, 
                                  generated_samples.subVec(n_samples,1));
    }
    else
    {
        gradient_affine_transform(nnet_input, wout, bout, gradient_nnet_input, 
                                  gradient_wout, gradient_bout, 
                                  gradient_last_layer,
                                  (dist_rep_dim <= 0), true, 
                                  learning_rate*train_sample_weight, 
                                  weight_decay+output_layer_weight_decay,
                                  bias_decay+output_layer_bias_decay, 
                                  generated_samples.subVec(n_samples,1));
    }


    if(nhidden>0 && direct_in_to_out)
    {
        gradient_affine_transform(nnet_input, direct_wout, direct_bout,
                                  gradient_nnet_input, 
                                  gradient_direct_wout, gradient_direct_bout,
                                  gradient_last_layer,
                                  dist_rep_dim<=0, true,
                                  learning_rate*train_sample_weight, 
                                  weight_decay+direct_in_to_out_weight_decay,
                                  0,
                                  generated_samples.subVec(n_samples,1));
    }

    // Importance sampling contributions
    for(int i=0; i<n_samples; i++)
    {
        gradient_last_layer.resize(1);
        gradient_last_layer[0] = -learning_rate*train_sample_weight*
            importance_sampling_ratios[i]/sum;

        if(nhidden2 > 0) {
            gradient_affine_transform(hidden2v, wout, bout, gradient_hidden2v, 
                                      gradient_wout, gradient_bout, 
                                      gradient_last_layer,
                                      false, true, 
                                      learning_rate*train_sample_weight, 
                                      weight_decay+output_layer_weight_decay,
                                      bias_decay+output_layer_bias_decay,
                                      generated_samples.subVec(i,1));
        }
        else if(nhidden > 0) 
        {
            gradient_affine_transform(hiddenv, wout, bout, gradient_hiddenv,
                                      gradient_wout, gradient_bout, 
                                      gradient_last_layer,
                                      false, true, 
                                      learning_rate*train_sample_weight, 
                                      weight_decay+output_layer_weight_decay,
                                      bias_decay+output_layer_bias_decay, 
                                      generated_samples.subVec(i,1));
        }
        else
        {
            gradient_affine_transform(nnet_input, wout, bout, 
                                      gradient_nnet_input, 
                                      gradient_wout, gradient_bout, 
                                      gradient_last_layer,
                                      (dist_rep_dim <= 0), true, 
                                      learning_rate*train_sample_weight, 
                                      weight_decay+output_layer_weight_decay,
                                      bias_decay+output_layer_bias_decay, 
                                      generated_samples.subVec(i,1));
        }


        if(nhidden>0 && direct_in_to_out)
        {
            gradient_affine_transform(nnet_input, direct_wout, direct_bout,
                                      gradient_nnet_input, 
                                      gradient_direct_wout, gradient_direct_bout,
                                      gradient_last_layer,
                                      dist_rep_dim<=0, true,
                                      learning_rate*train_sample_weight, 
                                      weight_decay+direct_in_to_out_weight_decay,
                                      0,
                                      generated_samples.subVec(i,1));
        }

    }

    // Propagate all contributions through rest of the network

    if(nhidden2 > 0)
    {
        gradient_transfer_func(hidden2v,gradient_act_hidden2v,gradient_hidden2v);
        gradient_affine_transform(hiddenv, w2, b2, gradient_hiddenv, 
                                  gradient_w2, gradient_b2, gradient_act_hidden2v,
                                  false, false,learning_rate*train_sample_weight, 
                                  weight_decay+layer2_weight_decay,
                                  bias_decay+layer2_bias_decay);
    }
    if(nhidden > 0)
    {
        gradient_transfer_func(hiddenv,gradient_act_hiddenv,gradient_hiddenv);  
        gradient_affine_transform(nnet_input, w1, b1, gradient_nnet_input, 
                                  gradient_w1, gradient_b1, gradient_act_hiddenv,
                                  dist_rep_dim<=0, false,learning_rate*train_sample_weight, 
                                  weight_decay+layer1_weight_decay,
                                  bias_decay+layer1_bias_decay);
    }

    if(dist_rep_dim > 0)
    {
        nfeats = 0;
        id = 0;
        for(int i=0; i<inputsize_; )
        {
            ifeats = 0;
            for(int j=0; j<n_feat_sets; j++,i++)
                ifeats += feats[i].length();
            gradient_affine_transform(feat_input.subVec(nfeats,ifeats),
                                      wout_dist_rep, bout_dist_rep,
                                      gradient_feat_input,// Useless anyways...
                                      gradient_wout_dist_rep,
                                      gradient_bout_dist_rep,
                                      gradient_nnet_input.subVec(
                                          id*dist_rep_dim,dist_rep_dim),
                                      true, false, 
                                      learning_rate*train_sample_weight, 
                                      weight_decay+
                                      output_layer_dist_rep_weight_decay,
                                      bias_decay
                                      +output_layer_dist_rep_bias_decay);
            nfeats += ifeats;
            id++;
        }
    }
    clearProppathGradient();

    // Update parameters and clear gradient
    if(!stochastic_gradient_descent_speedup)
        update();
}
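The estimate implemented above can be summarized as follows. Writing $a_w(x) = -E(w,x)$ for the negative energies returned by getNegativeEnergyValues, the exact log-likelihood gradient for the observed word $w_t$ is

$$\nabla_\theta \log P(w_t \mid x) = \nabla_\theta\, a_{w_t}(x) - \sum_{w} P(w \mid x)\, \nabla_\theta\, a_w(x),$$

and the intractable sum over the vocabulary is replaced by a self-normalized importance-sampling estimate built from words $w_1, \dots, w_n$ drawn from the proposal $q(\cdot \mid x)$:

$$\sum_{w} P(w \mid x)\, \nabla_\theta\, a_w(x) \approx \frac{1}{\sum_j r_j} \sum_{i=1}^{n} r_i\, \nabla_\theta\, a_{w_i}(x), \qquad r_i = \frac{\exp(a_{w_i}(x))}{q(w_i \mid x)}.$$

This is what the two passes above accumulate: the target word contributes to the parameter gradients with weight learning_rate * train_sample_weight, and each sampled word with weight -learning_rate * train_sample_weight * r_i / (sum of the r_j).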

Here is the call graph for this function:

Here is the caller graph for this function:

void PLearn::NeuralProbabilisticLanguageModel::initializeParams ( bool  set_seed = true) [protected, virtual]

Initialize the parameters.

If 'set_seed' is set to false, the seed will not be set in this method (it is assumed to have already been initialized according to the 'seed' option).

Definition at line 2180 of file NeuralProbabilisticLanguageModel.cc.

References b1, b2, bout, bout_dist_rep, PLearn::TMat< T >::clear(), PLearn::TVec< T >::clear(), direct_bout, direct_in_to_out, direct_wout, dist_rep_dim, feat_input, feat_sets, fillWeights(), fixed_output_weights, gradient_act_hidden2v, gradient_act_hiddenv, gradient_act_outputv, gradient_b1, gradient_b2, gradient_bout, gradient_bout_dist_rep, gradient_direct_bout, gradient_direct_wout, gradient_hidden2v, gradient_hiddenv, gradient_nnet_input, gradient_outputv, gradient_w1, gradient_w2, gradient_wout, gradient_wout_dist_rep, hidden2v, hiddenv, i, PLearn::PLearner::inputsize_, n_feat_sets, nhidden, nhidden2, nnet_input, PLearn::TMat< T >::resize(), PLearn::TVec< T >::resize(), rgen, PLearn::PLearner::seed_, PLearn::TVec< T >::size(), total_feats_per_token, total_output_size, PLearn::TMat< T >::toVec(), PLearn::PLearner::train_set, w1, w2, wout, and wout_dist_rep.

Referenced by build_().

{
    if (set_seed) {
        if (seed_>=0)
            rgen->manual_seed(seed_);
    }


    PP<Dictionary> dict = train_set->getDictionary(inputsize_);
    total_output_size = dict->size();

    total_feats_per_token = 0;
    for(int i=0; i<n_feat_sets; i++)
        total_feats_per_token += feat_sets[i]->size();

    int nnet_inputsize;
    if(dist_rep_dim > 0)
    {
        wout_dist_rep.resize(total_feats_per_token,dist_rep_dim);
        bout_dist_rep.resize(dist_rep_dim);
        nnet_inputsize = dist_rep_dim*inputsize_/n_feat_sets;
        nnet_input.resize(nnet_inputsize);

        fillWeights(wout_dist_rep);
        bout_dist_rep.clear();

        gradient_wout_dist_rep.resize(total_feats_per_token,dist_rep_dim);
        gradient_bout_dist_rep.resize(dist_rep_dim);
        gradient_nnet_input.resize(nnet_inputsize);
        gradient_wout_dist_rep.clear();
        gradient_bout_dist_rep.clear();
        gradient_nnet_input.clear();
    }
    else
    {
        nnet_inputsize = total_feats_per_token*inputsize_/n_feat_sets;
        nnet_input = feat_input;
    }

    if(nhidden>0) 
    {
        w1.resize(nnet_inputsize,nhidden);
        b1.resize(nhidden);
        hiddenv.resize(nhidden);

        fillWeights(w1);
        b1.clear();

        gradient_w1.resize(nnet_inputsize,nhidden);
        gradient_b1.resize(nhidden);
        gradient_hiddenv.resize(nhidden);
        gradient_act_hiddenv.resize(nhidden);
        gradient_w1.clear();
        gradient_b1.clear();
        gradient_hiddenv.clear();
        gradient_act_hiddenv.clear();
        if(nhidden2>0) 
        {
            w2.resize(nhidden,nhidden2);
            b2.resize(nhidden2);
            hidden2v.resize(nhidden2);
            wout.resize(nhidden2,total_output_size);
            bout.resize(total_output_size);

            fillWeights(w2);
            b2.clear();

            gradient_w2.resize(nhidden,nhidden2);
            gradient_b2.resize(nhidden2);
            gradient_hidden2v.resize(nhidden2);
            gradient_act_hidden2v.resize(nhidden2);
            gradient_wout.resize(nhidden2,total_output_size);
            gradient_bout.resize(total_output_size);
            gradient_w2.clear();
            gradient_b2.clear();
            gradient_hidden2v.clear();
            gradient_act_hidden2v.clear();
            gradient_wout.clear();
            gradient_bout.clear();
        }
        else
        {
            wout.resize(nhidden,total_output_size);
            bout.resize(total_output_size);

            gradient_wout.resize(nhidden,total_output_size);
            gradient_bout.resize(total_output_size);
            gradient_wout.clear();
            gradient_bout.clear();
        }
            
        if(direct_in_to_out)
        {
            direct_wout.resize(nnet_inputsize,total_output_size);
            direct_bout.resize(0); // Because it is not used

            fillWeights(direct_wout);
                
            gradient_direct_wout.resize(nnet_inputsize,total_output_size);
            gradient_direct_wout.clear();
            gradient_direct_bout.resize(0); // idem
        }
    }
    else
    {
        wout.resize(nnet_inputsize,total_output_size);
        bout.resize(total_output_size);

        gradient_wout.resize(nnet_inputsize,total_output_size);
        gradient_bout.resize(total_output_size);
        gradient_wout.clear();
        gradient_bout.clear();
    }

    //fillWeights(wout);
    
    if (fixed_output_weights) {
        static Vec values;
        if (values.size()==0)
        {
            values.resize(2);
            values[0]=-1;
            values[1]=1;
        }
        rgen->fill_random_discrete(wout.toVec(), values);
    }
    else 
        fillWeights(wout);

    bout.clear();

    gradient_outputv.resize(total_output_size);
    gradient_act_outputv.resize(total_output_size);
    gradient_outputv.clear();
    gradient_act_outputv.clear();
}
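As a concrete (hypothetical) sizing example for the allocations above: with a context of 3 tokens, a single feature set per token (n_feat_sets = 1) of size 10,000, dist_rep_dim = 50, nhidden = 100, nhidden2 = 0 and a target dictionary of size V, one gets wout_dist_rep: 10,000 x 50 (shared by all 3 positions), nnet_input of size 150 (= 50 * 3 / 1), w1: 150 x 100, b1: 100, and wout: 100 x V with bout of size V; each parameter matrix is paired with a gradient buffer of the same shape, cleared to zero.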

Here is the call graph for this function:

Here is the caller graph for this function:

void PLearn::NeuralProbabilisticLanguageModel::makeDeepCopyFromShallowCopy ( CopiesMap copies) [virtual]

Transforms a shallow copy into a deep copy.

Reimplemented from PLearn::PLearner.

Definition at line 2320 of file NeuralProbabilisticLanguageModel.cc.

References b1, b2, bout, bout_dist_rep, cost_funcs, PLearn::deepCopyField(), densities, direct_bout, direct_wout, feat_input, feat_sets, feats, feats_since_last_update, generated_samples, gradient, gradient_act_hidden2v, gradient_act_hiddenv, gradient_act_outputv, gradient_b1, gradient_b2, gradient_bout, gradient_bout_dist_rep, gradient_direct_bout, gradient_direct_wout, gradient_feat_input, gradient_hidden2v, gradient_hiddenv, gradient_last_layer, gradient_nnet_input, gradient_outputv, gradient_w1, gradient_w2, gradient_wout, gradient_wout_dist_rep, hidden2v, hiddenv, importance_sampling_ratios, last_layer, PLearn::PLearner::makeDeepCopyFromShallowCopy(), neg_energies, nnet_input, output_comp, PLERROR, proposal_distribution, rgen, row, sample, target_values, target_values_reference_set, target_values_since_last_update, val_string_reference_set, w1, w2, wout, and wout_dist_rep.

{
    inherited::makeDeepCopyFromShallowCopy(copies);

    // Private variables
    deepCopyField(target_values,copies);
    deepCopyField(output_comp,copies);
    deepCopyField(row,copies);
    deepCopyField(last_layer,copies);
    deepCopyField(gradient_last_layer,copies);
    deepCopyField(feats,copies);
    deepCopyField(gradient,copies);
    deepCopyField(neg_energies,copies);
    deepCopyField(densities,copies);

    // Protected variables
    deepCopyField(feat_input,copies);
    deepCopyField(gradient_feat_input,copies);
    deepCopyField(nnet_input,copies);
    deepCopyField(gradient_nnet_input,copies);
    deepCopyField(hiddenv,copies);
    deepCopyField(gradient_hiddenv,copies);
    deepCopyField(gradient_act_hiddenv,copies);
    deepCopyField(hidden2v,copies);
    deepCopyField(gradient_hidden2v,copies);
    deepCopyField(gradient_act_hidden2v,copies);
    deepCopyField(gradient_outputv,copies);
    deepCopyField(gradient_act_outputv,copies);
    deepCopyField(rgen,copies);
    deepCopyField(feats_since_last_update,copies);
    deepCopyField(target_values_since_last_update,copies);
    deepCopyField(val_string_reference_set,copies);
    deepCopyField(target_values_reference_set,copies);
    deepCopyField(importance_sampling_ratios,copies);
    deepCopyField(sample,copies);
    deepCopyField(generated_samples,copies);

    // Public variables
    deepCopyField(w1,copies);
    deepCopyField(gradient_w1,copies);
    deepCopyField(b1,copies);
    deepCopyField(gradient_b1,copies);
    deepCopyField(w2,copies);
    deepCopyField(gradient_w2,copies);
    deepCopyField(b2,copies);
    deepCopyField(gradient_b2,copies);
    deepCopyField(wout,copies);
    deepCopyField(gradient_wout,copies);
    deepCopyField(bout,copies);
    deepCopyField(gradient_bout,copies);
    deepCopyField(direct_wout,copies);
    deepCopyField(gradient_direct_wout,copies);
    deepCopyField(direct_bout,copies);
    deepCopyField(gradient_direct_bout,copies);
    deepCopyField(wout_dist_rep,copies);
    deepCopyField(gradient_wout_dist_rep,copies);
    deepCopyField(bout_dist_rep,copies);
    deepCopyField(gradient_bout_dist_rep,copies);

    // Public build options
    deepCopyField(cost_funcs,copies);
    deepCopyField(feat_sets,copies);
    deepCopyField(proposal_distribution,copies);

    PLERROR("not up to date");
}

Here is the call graph for this function:

int PLearn::NeuralProbabilisticLanguageModel::my_argmax ( const Vec vec,
int  default_compare = 0 
) const [private]

Argmax function that lets you define the default (first) component used for comparisons.

This is useful to avoid bias in the prediction when getValues() provides some information about the prior distribution of the targets (e.g. the first target returned by getValues() is the most likely) and the model's output is the same for all targets: when all outputs are equal, the returned index is default_compare rather than always 0.

Definition at line 974 of file NeuralProbabilisticLanguageModel.cc.

References PLearn::TVec< T >::data(), i, PLearn::TVec< T >::length(), and PLERROR.

Referenced by computeOutput().

{
#ifdef BOUNDCHECK
    if(vec.length()==0)
        PLERROR("IN int argmax(const TVec<T>& vec) vec has zero length");
#endif
    real* v = vec.data();
    int indexmax = default_compare;
    real maxval = v[default_compare];
    for(int i=0; i<vec.length(); i++)
        if(v[i]>maxval)
        {
            maxval = v[i];
            indexmax = i;
        }
    return indexmax;
}

Here is the call graph for this function:

Here is the caller graph for this function:

real PLearn::NeuralProbabilisticLanguageModel::nll ( const Vec outputv,
int  target 
) const [private]

Negative log-likelihood loss.

Definition at line 2169 of file NeuralProbabilisticLanguageModel.cc.

References PLearn::safeflog().

Referenced by fpropCostsFromOutput().

{
    return -safeflog(outputv[target]);
}

Here is the call graph for this function:

Here is the caller graph for this function:

void PLearn::NeuralProbabilisticLanguageModel::output_gradient_verification ( Vec  grad,
Vec  est_grad 
) [protected]

Definition at line 3046 of file NeuralProbabilisticLanguageModel.cc.

References PLearn::apply(), PLearn::argmax(), PLearn::dot(), PLearn::endl(), PLearn::FABS(), PLearn::fast_exact_is_equal(), i, PLearn::is_missing(), PLearn::TVec< T >::length(), PLearn::max(), MISSING_VALUE, and PLearn::norm().

Referenced by verify_gradient().

{
    // Inspired from Func::verifyGradient()

    Vec num = apply(grad - est_grad,(tRealFunc)FABS);
    Vec denom = real(0.5)*apply(grad + est_grad,(tRealFunc)FABS);
    for (int i = 0; i < num.length(); i++)
    {
        if (!fast_exact_is_equal(num[i], 0))
            num[i] /= denom[i];
        else
            if(!fast_exact_is_equal(denom[i],0))
                cout << "at position " << i << " num[i] == 0 but denom[i] = " 
                     << denom[i] << endl;
    }
    int pos = argmax(num);
    cout << max(num) << " (at position " << pos << "/" << num.length()
         << ", computed = " << grad[pos] << " and estimated = "
         << est_grad[pos] << ")" << endl;

    real norm_grad = norm(grad);
    real norm_est_grad = norm(est_grad);
    real cos_angle = fast_exact_is_equal(norm_grad*norm_est_grad,
                                         0)
        ? MISSING_VALUE
        : dot(grad,est_grad) /
        (norm_grad*norm_est_grad);
    if (cos_angle > 1)
        cos_angle = 1;      // Numerical imprecisions can lead to such situation.
    cout << "grad.length() = " << grad.length() << endl;
    cout << "cos(angle) : " << cos_angle << endl;
    cout << "angle : " << ( is_missing(cos_angle) ? MISSING_VALUE
                            : acos(cos_angle) ) << endl;
}
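
In terms of the back-propagated gradient grad and the finite-difference estimate est_grad, the quantities printed above are

    num[i]     = |grad[i] - est_grad[i]| / ( 0.5 * |grad[i] + est_grad[i]| )
    cos(angle) = dot(grad, est_grad) / ( norm(grad) * norm(est_grad) )

The first line is an element-wise relative discrepancy (its maximum and its position are reported); a cosine close to 1, i.e. an angle close to 0, indicates that the analytic and estimated gradients agree in direction.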

int PLearn::NeuralProbabilisticLanguageModel::outputsize ( ) const [virtual]

SUBCLASS WRITING: override this so that it returns the size of this learner's output, as a function of its inputsize(), targetsize() and set options.

Implements PLearn::PLearner.

Definition at line 2390 of file NeuralProbabilisticLanguageModel.cc.

References PLearn::PLearner::targetsize_.

{
    return targetsize_;
}

VMat PLearn::NeuralProbabilisticLanguageModel::processDataSet ( VMat  dataset) const [protected, virtual]

Changes the reference_set and then calls the parent class's method.

Reimplemented from PLearn::PLearner.

Definition at line 3116 of file NeuralProbabilisticLanguageModel.cc.

References PLearn::PLearner::processDataSet(), target_values_reference_set, PLearn::PLearner::train_set, val_string_reference_set, and PLearn::VMat::width().

{
    VMat ret;
    val_string_reference_set = dataset;
    // Assumes it contains the target part information
    if(dataset->width() > train_set->inputsize())
        target_values_reference_set = dataset;
    ret = inherited::processDataSet(dataset);
    val_string_reference_set = train_set;
    if(dataset->width() > train_set->inputsize())
        target_values_reference_set = train_set;
    return ret;
}

void PLearn::NeuralProbabilisticLanguageModel::test(VMat testset, PP< VecStatsCollector > test_stats, VMat testoutputs = 0, VMat testcosts = 0) const [protected, virtual]

Changes the reference_set and then calls the parent class's method.

Reimplemented from PLearn::PLearner.

Definition at line 3105 of file NeuralProbabilisticLanguageModel.cc.

References target_values_reference_set, PLearn::PLearner::test(), PLearn::PLearner::train_set, and val_string_reference_set.
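
The body is not reproduced here; given the reference list above and the sibling methods use() and processDataSet(), it presumably follows the same swap-and-restore pattern around the inherited call (hypothetical reconstruction, not the verbatim source):

{
    val_string_reference_set = testset;
    target_values_reference_set = testset;
    inherited::test(testset, test_stats, testoutputs, testcosts);
    val_string_reference_set = train_set;
    target_values_reference_set = train_set;
}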

void PLearn::NeuralProbabilisticLanguageModel::train ( ) [virtual]

*** SUBCLASS WRITING: ***

The role of the train method is to bring the learner up to stage==nstages, updating the stats with training costs measured on-line in the process.

TYPICAL CODE:

  static Vec input;  // static so we don't reallocate/deallocate memory each time...
  static Vec target; // (but be careful that static means shared!)
  input.resize(inputsize());    // the train_set's inputsize()
  target.resize(targetsize());  // the train_set's targetsize()
  real weight;
  
  if(!train_stats)   // make a default stats collector, in case there's none
      train_stats = new VecStatsCollector();
  
  if(nstages<stage)  // asking to revert to a previous stage!
      forget();      // reset the learner to stage=0
  
  while(stage<nstages)
  {
      // clear statistics of previous epoch
      train_stats->forget(); 
            
      //... train for 1 stage, and update train_stats,
      // using train_set->getSample(input, target, weight);
      // and train_stats->update(train_costs)
          
      ++stage;
      train_stats->finalize(); // finalize statistics for this epoch
  }

Implements PLearn::PLearner.

Definition at line 2397 of file NeuralProbabilisticLanguageModel.cc.

References b1, b2, batch_size, bout, bout_dist_rep, bprop(), classname(), PLearn::TVec< T >::data(), decrease_constant, direct_in_to_out, direct_wout, dist_rep_dim, PLearn::endl(), fprop(), PLearn::VMat::getExample(), getTrainCostNames(), gradient_b1, gradient_b2, gradient_bout, gradient_bout_dist_rep, gradient_direct_wout, gradient_last_layer, gradient_w1, gradient_w2, gradient_wout, gradient_wout_dist_rep, i, importance_sampling_gradient_update(), importance_sampling_ratios, PLearn::VMat::length(), minimum_effective_sample_size, nhidden, nhidden2, PLearn::PLearner::nstages, PLERROR, pval1, PLearn::PLearner::report_progress, PLearn::TVec< T >::resize(), sampling_block_size, PLearn::PLearner::stage, start_learning_rate, stochastic_gradient_descent_speedup, PLearn::TVec< T >::subVec(), PLearn::tostring(), total_output_size, total_updates, PLearn::PLearner::train_set, PLearn::PLearner::train_stats, update(), PLearn::PLearner::verbosity, w1, w2, wout, and wout_dist_rep.

{
    //Profiler::activate();
    if(!train_set)
        PLERROR("In NeuralProbabilisticLanguageModel::train, "
                "you did not setTrainingSet");

    if(!train_stats)
        PLERROR("In NeuralProbabilisticLanguageModel::train, "
                "you did not setTrainStatsCollector");
 
    Vec outputv(total_output_size);
    Vec costsv(getTrainCostNames().length());
    Vec inputv(train_set->inputsize());
    Vec targetv(train_set->targetsize());
    real sample_weight = 1;

    int l = train_set->length();  
    int bs = batch_size>0 ? batch_size : l;

    // Importance sampling speedup variables
    
    // Effective sample size statistics
    real effective_sample_size_sum = 0;
    real effective_sample_size_square_sum = 0;
    real importance_sampling_ratio_k = 0;
    // Current true sample size;
    int n_samples = 0;

    real effective_sample_size = 0;

    PP<ProgressBar> pb;
    if(report_progress)
        pb = new ProgressBar("Training " + classname() + " from stage " 
                             + tostring(stage) + " to " 
                             + tostring(nstages), nstages-stage);

    //if(stage == 0)
    //{
    //    for(int t=0; t<l;t++)
    //    {
    //        cout << "t=" << t << " ";
    //        train_set->getExample(t,inputv,targetv,sample_weight);
    //        row.subVec(0,inputsize_) << inputv;
    //        train_set->getValues(row,inputsize_,target_values);
    //        if(target_values.length() != 1)
    //            verify_gradient(inputv,targetv,1e-6);
    //    }
    //    return;
    //}

    Mat old_gradient_wout;
    Vec old_gradient_bout;
    Mat old_gradient_wout_dist_rep;
    Vec old_gradient_bout_dist_rep;
    Mat old_gradient_w1;
    Vec old_gradient_b1;
    Mat old_gradient_w2;
    Vec old_gradient_b2;
    Mat old_gradient_direct_wout;

    if(stochastic_gradient_descent_speedup)
    {
        // Trick to make stochastic gradient descent faster

        old_gradient_wout = gradient_wout;
        old_gradient_bout = gradient_bout;
        gradient_wout = wout;
        gradient_bout = bout;
        
        if(dist_rep_dim > 0)
        {
            old_gradient_wout_dist_rep = gradient_wout_dist_rep;
            old_gradient_bout_dist_rep = gradient_bout_dist_rep;
            gradient_wout_dist_rep = wout_dist_rep;
            gradient_bout_dist_rep = bout_dist_rep;
        }

        if(nhidden>0) 
        {
            old_gradient_w1 = gradient_w1;
            old_gradient_b1 = gradient_b1;
            gradient_w1 = w1;
            gradient_b1 = b1;
            if(nhidden2>0) 
            {
                old_gradient_w2 = gradient_w2;
                old_gradient_b2 = gradient_b2;
                gradient_w2 = w2;
                gradient_b2 = b2;
            }
            
            if(direct_in_to_out)
            {
                old_gradient_direct_wout = gradient_direct_wout;
                gradient_direct_wout = direct_wout;
            }
        }
    }

    int initial_stage = stage;
    while(stage<nstages)
    {
        for(int t=0; t<l;)
        {
            //if(t%1000 == 0)
            //{
            //    cout << "Time: " << clock()/CLOCKS_PER_SEC << " seconds." << endl;
            //}
            for(int i=0; i<bs; i++)
            {
                train_set->getExample(t%l,inputv,targetv,sample_weight);

                if(proposal_distribution)
                {
                    n_samples = 0;
                    importance_sampling_ratios.resize(0);
                    effective_sample_size = 0;
                    effective_sample_size_sum = 0;
                    effective_sample_size_square_sum = 0;
                    while(effective_sample_size < minimum_effective_sample_size)
                    {
                        if(n_samples >= total_output_size)
                        {
                            gradient_last_layer.resize(total_output_size);
                            
                            fprop(inputv,outputv,targetv,costsv,sample_weight);
                            bprop(inputv,outputv,targetv,costsv,
                                  start_learning_rate/
                                  (bs*(1.0+decrease_constant*total_updates)),
                                  sample_weight);
                            train_stats->update(costsv);
                            break;
                        }
                        
                        importance_sampling_gradient_update(
                            inputv,targetv,
                            start_learning_rate/
                            (bs*(1.0+decrease_constant*total_updates)),
                            sampling_block_size,
                            sample_weight
                            );

                        // Update effective sample size
                        pval1 = importance_sampling_ratios.subVec(
                            n_samples,sampling_block_size).data();
                        for(int k=0; k<sampling_block_size; k++)
                        {                            
                            effective_sample_size_sum += *pval1;
                            effective_sample_size_square_sum += *pval1 * (*pval1);
                            pval1++;
                        }
                        
                        effective_sample_size = 
                            (effective_sample_size_sum*effective_sample_size_sum)/
                            effective_sample_size_square_sum;
                        n_samples += sampling_block_size;
                    }
                }
                else
                {
                    //Profiler::start("fprop()");
                    fprop(inputv,outputv,targetv,costsv,sample_weight);
                    //Profiler::end("fprop()");
                    //Profiler::start("bprop()");
                    bprop(inputv,outputv,targetv,costsv,
                          start_learning_rate/
                          (bs*(1.0+decrease_constant*total_updates)),
                          sample_weight);
                    //Profiler::end("bprop()");
                    train_stats->update(costsv);
                }
                t++;
            }
            // Update
            if(!stochastic_gradient_descent_speedup)
                update();
            total_updates++;
        }
        train_stats->finalize();
        ++stage;
        if(verbosity>2)
            cout << "Epoch " << stage << " train objective: " 
                 << train_stats->getMean() << endl;
        if(pb) pb->update(stage-initial_stage);
    }

    if(stochastic_gradient_descent_speedup)
    {
        // Trick to make stochastic gradient descent faster

        gradient_wout = old_gradient_wout;
        gradient_bout = old_gradient_bout;
        
        if(dist_rep_dim > 0)
        {
            gradient_wout_dist_rep = old_gradient_wout_dist_rep;
            gradient_bout_dist_rep = old_gradient_bout_dist_rep;
        }

        if(nhidden>0) 
        {
            gradient_w1 = old_gradient_w1;
            gradient_b1 = old_gradient_b1;
            if(nhidden2>0) 
            {
                gradient_w2 = old_gradient_w2;
                gradient_b2 = old_gradient_b2;
            }
            
            if(direct_in_to_out)
            {
                gradient_direct_wout = old_gradient_direct_wout;
            }
        }
    }
    //Profiler::report(cout);
}
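
Two quantities in the loop above are worth spelling out. The per-example learning rate follows an inverse-decay schedule, and the importance-sampling branch keeps drawing blocks of sampling_block_size candidates until the effective sample size of the accumulated ratios r_k in importance_sampling_ratios is large enough:

    learning_rate         = start_learning_rate / ( bs * (1 + decrease_constant * total_updates) )
    effective_sample_size = ( sum_k r_k )^2 / ( sum_k r_k^2 )

Sampling stops once effective_sample_size >= minimum_effective_sample_size; if total_output_size samples have been drawn without reaching that threshold, the code falls back to an exact fprop()/bprop() gradient step for that example.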

void PLearn::NeuralProbabilisticLanguageModel::update ( ) [protected]

Update network's parameters.

Definition at line 775 of file NeuralProbabilisticLanguageModel.cc.

References b1, b2, bout, bout_dist_rep, direct_bout, direct_in_to_out, direct_wout, dist_rep_dim, feats_since_last_update, gradient_b1, gradient_b2, gradient_bout, gradient_bout_dist_rep, gradient_direct_bout, gradient_direct_wout, gradient_w1, gradient_w2, gradient_wout, gradient_wout_dist_rep, nhidden, nhidden2, possible_targets_vary, PLearn::TVec< T >::resize(), target_values_since_last_update, update_affine_transform(), w1, w2, wout, and wout_dist_rep.

Referenced by importance_sampling_gradient_update(), and train().
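
The body is not reproduced here; judging from the reference list above, it presumably forwards each (parameter, gradient) pair to update_affine_transform(), with feats_since_last_update giving the sparse input rows touched since the last update and target_values_since_last_update giving the sparse output columns when possible_targets_vary is true. A rough, hypothetical sketch of the first hidden layer call:

{
    // First hidden layer: sparse feature input, dense output.
    if(nhidden > 0)
        update_affine_transform(feats_since_last_update, w1, b1,
                                gradient_w1, gradient_b1,
                                true, false, target_values_since_last_update);

    // The remaining pairs (w2/b2, wout/bout, wout_dist_rep/bout_dist_rep,
    // direct_wout) are presumably handled analogously, with
    // output_is_sparse = possible_targets_vary for the output layer.
}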

void PLearn::NeuralProbabilisticLanguageModel::update_affine_transform(Vec input, Mat weights, Vec bias, Mat gweights, Vec gbias, bool input_is_sparse, bool output_is_sparse, Vec output_indices) [protected]

Update affine transformation's parameters.

Definition at line 825 of file NeuralProbabilisticLanguageModel.cc.

References PLearn::TMat< T >::data(), PLearn::TVec< T >::data(), i, PLearn::TMat< T >::isCompact(), j, PLearn::TMat< T >::length(), PLearn::TVec< T >::length(), ni, nj, PLERROR, pval1, pval2, pval3, and PLearn::TMat< T >::width().

Referenced by update().

{
    // Bias
    if(bias.length() != 0)
    {
        if(output_is_sparse)
        {
            pval1 = gbias.data();
            pval2 = bias.data();
            pval3 = output_indices.data();
            ni = output_indices.length();
            for(int i=0; i<ni; i++)
            {
                pval2[(int)*pval3] += pval1[(int)*pval3];
                pval1[(int)*pval3] = 0;
                pval3++;
            }
        }
        else
        {
            pval1 = gbias.data();
            pval2 = bias.data();
            ni = bias.length();
            for(int i=0; i<ni; i++)
            {
                *pval2 += *pval1;
                *pval1 = 0;
                pval1++; 
                pval2++;
            }
        }
    }

    // Weights
    if(!input_is_sparse && !output_is_sparse)
    {
        if(!gweights.isCompact() || !weights.isCompact())
            PLERROR("In NeuralProbabilisticLanguageModel::"
                    "update_affine_transform(): weights or gweights is"
                    "not a compact TMat");
        ni = weights.length();
        nj = weights.width();
        pval1 = gweights.data();
        pval2 = weights.data();
        for(int i=0; i<ni; i++)
            for(int j=0; j<nj; j++)
            {
                *pval2 += *pval1;
                *pval1 = 0;
                pval1++;
                pval2++;
            }
    }
    else if(!input_is_sparse && output_is_sparse)
    {
        ni = output_indices.length();
        nj = input.length();
        pval3 = output_indices.data();
        for(int i=0; i<ni; i++)
        {
            for(int j=0; j<nj; j++)
            {
                weights(j,(int)*pval3) += gweights(j,(int)*pval3);
                gweights(j,(int)*pval3) = 0;
            }
            pval3++;
        }
    }
    else if(input_is_sparse && !output_is_sparse)
    {
        ni = input.length();
        nj = weights.width();
        pval3 = input.data();
        for(int i=0; i<ni; i++)
        {
            pval1 = gweights[(int)(*pval3)];
            pval2 = weights[(int)(*pval3++)];
            for(int j=0; j<nj;j++)
            {
                *pval2 += *pval1;
                *pval1 = 0;
                pval1++;
                pval2++;
            }
        }
    }
    else if(input_is_sparse && output_is_sparse)
    {
        // Weights
        ni = input.length();
        nj = output_indices.length();
        pval2 = input.data();
        for(int i=0; i<ni; i++)
        {
            pval3 = output_indices.data();
            for(int j=0; j<nj; j++)
            {
                weights((int)(*pval2),(int)*pval3) += 
                    gweights((int)(*pval2),(int)*pval3);
                gweights((int)(*pval2),(int)*pval3) = 0;
                pval3++;
            }
            pval2++;
        }
    }
}

void PLearn::NeuralProbabilisticLanguageModel::use(VMat testset, VMat outputs) const [protected, virtual]

Changes the reference_set and then calls the parent class's method.

Reimplemented from PLearn::PLearner.

Definition at line 3093 of file NeuralProbabilisticLanguageModel.cc.

References target_values_reference_set, PLearn::PLearner::train_set, PLearn::PLearner::use(), val_string_reference_set, and PLearn::VMat::width().

{
    val_string_reference_set = testset;
    if(testset->width() > train_set->inputsize())
        target_values_reference_set = testset;
    inherited::use(testset,outputs);
    val_string_reference_set = train_set;
    if(testset->width() > train_set->inputsize())
        target_values_reference_set = train_set;
}

void PLearn::NeuralProbabilisticLanguageModel::verify_gradient(Vec & input, Vec target, real step) [protected]

Verify gradient of propagation path.

Definition at line 2621 of file NeuralProbabilisticLanguageModel.cc.

References b1, b2, bout, bout_dist_rep, bprop(), PLearn::TMat< T >::clear(), PLearn::TVec< T >::clear(), clearProppathGradient(), direct_bout, direct_in_to_out, direct_wout, dist_rep_dim, PLearn::endl(), feat_input, feats, fprop(), getTrainCostNames(), gradient_b1, gradient_b2, gradient_bout, gradient_bout_dist_rep, gradient_direct_bout, gradient_direct_wout, gradient_w1, gradient_w2, gradient_wout, gradient_wout_dist_rep, hidden2v, hiddenv, i, ifeats, PLearn::PLearner::inputsize_, j, last_layer, n_feat_sets, nfeats, nhidden, nhidden2, nnet_input, output_comp, output_gradient_verification(), possible_targets_vary, PLearn::TMat< T >::resize(), PLearn::TVec< T >::resize(), PLearn::TVec< T >::subVec(), target_values, total_feats_per_token, total_output_size, PLearn::TMat< T >::toVec(), verify_gradient_affine_transform(), w1, w2, wout, and wout_dist_rep.

{
    Vec costsv(getTrainCostNames().length());
    real sampleweight = 1;
    real verify_step = step;
    
    // To avoid the interaction between fprop and this function
    int nfeats = 0;
    int id = 0;
    int ifeats = 0;

    Vec est_gradient_bout;
    Mat est_gradient_wout;
    Vec est_gradient_bout_dist_rep;
    Mat est_gradient_wout_dist_rep;
    Vec est_gradient_b1;
    Mat est_gradient_w1;
    Vec est_gradient_b2;
    Mat est_gradient_w2;
    Vec est_gradient_direct_bout;
    Mat est_gradient_direct_wout;

    int nnet_inputsize;
    if(dist_rep_dim > 0)
    {
        nnet_inputsize = dist_rep_dim*inputsize_/n_feat_sets;
        est_gradient_wout_dist_rep.resize(total_feats_per_token,dist_rep_dim);
        est_gradient_bout_dist_rep.resize(dist_rep_dim);
        est_gradient_wout_dist_rep.clear();
        est_gradient_bout_dist_rep.clear();
        gradient_wout_dist_rep.clear();
        gradient_bout_dist_rep.clear();
    }
    else
    {
        nnet_inputsize = total_feats_per_token*inputsize_/n_feat_sets;
    }
    
    if(nhidden>0) 
    {
        est_gradient_w1.resize(nnet_inputsize,nhidden);
        est_gradient_b1.resize(nhidden);
        est_gradient_w1.clear();
        est_gradient_b1.clear();
        gradient_w1.clear();
        gradient_b1.clear();
        if(nhidden2>0) 
        {
            est_gradient_w2.resize(nhidden,nhidden2);
            est_gradient_b2.resize(nhidden2);
            est_gradient_wout.resize(nhidden2,total_output_size);
            est_gradient_bout.resize(total_output_size);
            est_gradient_w2.clear();
            est_gradient_b2.clear();
            est_gradient_wout.clear();
            est_gradient_bout.clear();
            gradient_w2.clear();
            gradient_b2.clear();
            gradient_wout.clear();
            gradient_bout.clear();
        }
        else
        {
            est_gradient_wout.resize(nhidden,total_output_size);
            est_gradient_bout.resize(total_output_size);
            est_gradient_wout.clear();
            est_gradient_bout.clear();
            gradient_wout.clear();
            gradient_bout.clear();
        }
            
        if(direct_in_to_out)
        {
            est_gradient_direct_wout.resize(nnet_inputsize,total_output_size);
            est_gradient_direct_wout.clear();
            est_gradient_direct_bout.resize(0); // idem
            gradient_direct_wout.clear();                        
        }
    }
    else
    {
        est_gradient_wout.resize(nnet_inputsize,total_output_size);
        est_gradient_bout.resize(total_output_size);
        est_gradient_wout.clear();
        est_gradient_bout.clear();
        gradient_wout.clear();
        gradient_bout.clear();
    }

    fprop(input, output_comp, targetv, costsv);
    bprop(input,output_comp,targetv,costsv,
          -1, sampleweight);
    clearProppathGradient();
    
    // Compute estimated gradient

    if(dist_rep_dim > 0) 
    {        
        nfeats = 0;
        id = 0;
        for(int i=0; i<inputsize_;)
        {
            ifeats = 0;
            for(int j=0; j<n_feat_sets; j++,i++)
                ifeats += feats[i].length();
            verify_gradient_affine_transform(
                input,output_comp, targetv, costsv, sampleweight,
                feat_input.subVec(nfeats,ifeats),
                wout_dist_rep, bout_dist_rep,
                est_gradient_wout_dist_rep, est_gradient_bout_dist_rep,
                true, false, verify_step);
            nfeats += ifeats;
            id++;
        }

        cout << "Verify wout_dist_rep" << endl;
        output_gradient_verification(gradient_wout_dist_rep.toVec(), 
                                     est_gradient_wout_dist_rep.toVec());
        cout << "Verify bout_dist_rep" << endl;
        output_gradient_verification(gradient_bout_dist_rep, 
                                     est_gradient_bout_dist_rep);
        gradient_wout_dist_rep.clear();
        gradient_bout_dist_rep.clear();

        if(nhidden>0) 
        {
            verify_gradient_affine_transform(
                input,output_comp, targetv, costsv, sampleweight,
                nnet_input,w1,b1,
                est_gradient_w1, est_gradient_b1, false,false, verify_step);

            cout << "Verify w1" << endl;
            output_gradient_verification(gradient_w1.toVec(), 
                                         est_gradient_w1.toVec());
            cout << "Verify b1" << endl;
            output_gradient_verification(gradient_b1, est_gradient_b1);
            
            if(nhidden2>0) 
            {
                verify_gradient_affine_transform(
                    input,output_comp, targetv, costsv, sampleweight,    
                    hiddenv,w2,b2,
                    est_gradient_w2, est_gradient_b2,
                    false,false, verify_step);
                cout << "Verify w2" << endl;
                output_gradient_verification(gradient_w2.toVec(), 
                                             est_gradient_w2.toVec());
                cout << "Verify b2" << endl;
                output_gradient_verification(gradient_b2, est_gradient_b2);

                last_layer = hidden2v;
            }
            else
                last_layer = hiddenv;
        }
        else
            last_layer = nnet_input;

        verify_gradient_affine_transform(
            input,output_comp, targetv, costsv, sampleweight,
            last_layer,wout,bout,
            est_gradient_wout, est_gradient_bout, false,
            possible_targets_vary,verify_step,target_values);

        cout << "Verify wout" << endl;
        output_gradient_verification(gradient_wout.toVec(), 
                                     est_gradient_wout.toVec());
        cout << "Verify bout" << endl;
        output_gradient_verification(gradient_bout, est_gradient_bout);
 
        if(direct_in_to_out && nhidden>0)
        {
            verify_gradient_affine_transform(
                input,output_comp, targetv, costsv, sampleweight,
                nnet_input,direct_wout,direct_bout,
                est_gradient_direct_wout, est_gradient_direct_bout,false,
                possible_targets_vary, verify_step, target_values);
            cout << "Verify direct_wout" << endl;
            output_gradient_verification(gradient_direct_wout.toVec(), 
                                         est_gradient_direct_wout.toVec());
            //cout << "Verify direct_bout" << endl;
            //output_gradient_verification(gradient_direct_bout, est_gradient_direct_bout);
        }
    }
    else
    {        
        if(nhidden>0)
        {
            verify_gradient_affine_transform(
                input,output_comp, targetv, costsv, sampleweight,
                feat_input,w1,b1,
                est_gradient_w1, est_gradient_b1,
                true,false, verify_step);

            cout << "Verify w1" << endl;
            output_gradient_verification(gradient_w1.toVec(), 
                                         est_gradient_w1.toVec());
            cout << "Verify b1" << endl;
            output_gradient_verification(gradient_b1, est_gradient_b1);

            if(nhidden2>0)
            {
                verify_gradient_affine_transform(
                    input,output_comp, targetv, costsv, sampleweight,
                    hiddenv,w2,b2,
                    est_gradient_w2, est_gradient_b2,true,false,
                    verify_step);

                cout << "Verify w2" << endl;
                output_gradient_verification(gradient_w2.toVec(), 
                                             est_gradient_w2.toVec());
                cout << "Verify b2" << endl;
                output_gradient_verification(gradient_b2, est_gradient_b2);
                
                last_layer = hidden2v;
            }
            else
                last_layer = hiddenv;
        }
        else
            last_layer = feat_input;
        
        verify_gradient_affine_transform(
            input,output_comp, targetv, costsv, sampleweight,
            last_layer,wout,bout,
            est_gradient_wout, est_gradient_bout, nhidden<=0,
            possible_targets_vary,verify_step, target_values);

        cout << "Verify wout" << endl;
        output_gradient_verification(gradient_wout.toVec(), 
                                     est_gradient_wout.toVec());
        cout << "Verify bout" << endl;
        output_gradient_verification(gradient_bout, est_gradient_bout);
        
        if(direct_in_to_out && nhidden>0)
        {
            verify_gradient_affine_transform(
                input,output_comp, targetv, costsv, sampleweight,
                feat_input,direct_wout,direct_bout,
                est_gradient_direct_wout, est_gradient_direct_bout,true,
                possible_targets_vary, verify_step,target_values);
            cout << "Verify direct_wout" << endl;
            output_gradient_verification(gradient_direct_wout.toVec(), 
                                         est_gradient_direct_wout.toVec());
            cout << "Verify direct_bout" << endl;
            output_gradient_verification(gradient_direct_bout, 
                                         est_gradient_direct_bout);
        }
    }

}

void PLearn::NeuralProbabilisticLanguageModel::verify_gradient_affine_transform ( Vec  global_input,
Vec global_output,
Vec global_targetv,
Vec global_costs,
real  sampleweight,
Vec  input,
Mat  weights,
Vec  bias,
Mat  est_gweights,
Vec  est_gbias,
bool  input_is_sparse,
bool  output_is_sparse,
real  step,
Vec  output_indices = Vec(0) 
) const [protected]

Verify gradient of affine_transform parameters.

Definition at line 2874 of file NeuralProbabilisticLanguageModel.cc.

References PLearn::TVec< T >::data(), fprop(), i, j, PLearn::TMat< T >::length(), PLearn::TVec< T >::length(), ni, nj, pval1, pval2, pval3, and PLearn::TMat< T >::width().

Referenced by verify_gradient().
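
Each estimated gradient entry below is a symmetric (central) finite difference of the first training cost: the parameter is perturbed by +step and -step, fprop() is re-run each time, and

    est_gradient = ( cost(parameter + step) - cost(parameter - step) ) / ( 2 * step )

where cost denotes global_costs[0] after fprop(global_input, global_output, global_targetv, global_costs, sampleweight).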

{
    real *pval1, *pval2, *pval3;
    int ni,nj;
    real out1,out2;
    // Bias
    if(bias.length() != 0)
    {
        if(output_is_sparse)
        {
            pval1 = est_gbias.data();
            pval2 = bias.data();
            pval3 = output_indices.data();
            ni = output_indices.length();
            for(int i=0; i<ni; i++)
            {
                pval2[(int)*pval3] += step;
                fprop(global_input, global_output, global_targetv, 
                      global_costs, sampleweight);
                out1 = global_costs[0];
                pval2[(int)*pval3] -= 2*step;
                fprop(global_input, global_output, global_targetv, 
                      global_costs, sampleweight);
                out2 = global_costs[0];
                pval1[(int)*pval3] = (out1-out2)/(2*step);
                pval2[(int)*pval3] += step;
                pval3++;
            }
        }
        else
        {
            pval1 = est_gbias.data();
            pval2 = bias.data();
            ni = bias.length();
            for(int i=0; i<ni; i++)
            {
                *pval2 += step;
                fprop(global_input, global_output, global_targetv, 
                      global_costs, sampleweight);
                out1 = global_costs[0];
                *pval2 -= 2*step;
                fprop(global_input, global_output, global_targetv, 
                      global_costs, sampleweight);
                out2 = global_costs[0];
                *pval1 = (out1-out2)/(2*step);
                *pval2 += step;
                pval1++; 
                pval2++;
            }
        }
    }

    // Weights
    if(!input_is_sparse && !output_is_sparse)
    {
        ni = weights.length();
        nj = weights.width();
        for(int i=0; i<ni; i++)
            for(int j=0; j<nj; j++)
            {
                weights(i,j) += step;
                fprop(global_input, global_output, global_targetv, 
                      global_costs, sampleweight);
                out1 = global_costs[0];
                weights(i,j) -= 2*step;
                fprop(global_input, global_output, global_targetv, 
                      global_costs, sampleweight);
                out2 = global_costs[0];
                weights(i,j) += step;
                est_gweights(i,j) = (out1-out2)/(2*step);
            }
    }
    else if(!input_is_sparse && output_is_sparse)
    {
        ni = output_indices.length();
        nj = input.length();
        pval3 = output_indices.data();
        for(int i=0; i<ni; i++)
        {
            for(int j=0; j<nj; j++)
            {
                weights(j,(int)*pval3) += step;
                fprop(global_input, global_output, global_targetv, 
                      global_costs, sampleweight);
                out1 = global_costs[0];
                weights(j,(int)*pval3) -= 2*step;
                fprop(global_input, global_output, global_targetv, 
                      global_costs, sampleweight);
                out2 = global_costs[0];
                weights(j,(int)*pval3) += step;
                est_gweights(j,(int)*pval3) = (out1-out2)/(2*step);
            }
            pval3++;
        }
    }
    else if(input_is_sparse && !output_is_sparse)
    {
        ni = input.length();
        nj = weights.width();
        if(ni != 0 )
        {
            pval3 = input.data();
            for(int i=0; i<ni; i++)
            {
                pval1 = est_gweights[(int)(*pval3)];
                pval2 = weights[(int)(*pval3++)];
                for(int j=0; j<nj;j++)
                {
                    *pval2 += step;
                    fprop(global_input, global_output, global_targetv, 
                          global_costs, sampleweight);
                    out1 = global_costs[0];
                    *pval2 -= 2*step;
                    fprop(global_input, global_output, global_targetv, 
                          global_costs, sampleweight);
                    out2 = global_costs[0];
                    *pval1 = (out1-out2)/(2*step);
                    *pval2 += step;
                    pval1++;
                    pval2++;
                }
            }
        }
    }
    else if(input_is_sparse && output_is_sparse)
    {
        // Weights
        ni = input.length();
        nj = output_indices.length();
        if(ni != 0)
        {
            pval2 = input.data();
            for(int i=0; i<ni; i++)
            {
                pval3 = output_indices.data();
                for(int j=0; j<nj; j++)
                {
                    weights((int)(*pval2),(int)*pval3) += step;
                    fprop(global_input, global_output, global_targetv, 
                          global_costs, sampleweight);
                    out1 = global_costs[0];
                    weights((int)(*pval2),(int)*pval3) -= 2*step;
                    fprop(global_input, global_output, global_targetv, 
                          global_costs, sampleweight);
                    out2 = global_costs[0];
                    est_gweights((int)(*pval2),(int)*pval3)  = 
                        (out1-out2)/(2*step);
                    weights((int)(*pval2),(int)*pval3) += step;
                    pval3++;
                }
                pval2++;
            }
        }
    }
}


Member Data Documentation

Reimplemented from PLearn::PLearner.

Definition at line 311 of file NeuralProbabilisticLanguageModel.h.

Number of samples to use to estimate gradient before an update.

0 means the whole training set (default: 1)

Definition at line 255 of file NeuralProbabilisticLanguageModel.h.

Referenced by build_(), declareOptions(), and train().

Decrease constant of gradient descent.

Definition at line 252 of file NeuralProbabilisticLanguageModel.h.

Referenced by declareOptions(), and train().

If true then direct input to output weights will be added (if nhidden > 0)

Definition at line 238 of file NeuralProbabilisticLanguageModel.h.

Referenced by bprop(), declareOptions(), fpropOutput(), getNegativeEnergyValues(), importance_sampling_gradient_update(), initializeParams(), train(), update(), and verify_gradient().

Weight decay for weights from input directly to output layer (default:0)

Definition at line 223 of file NeuralProbabilisticLanguageModel.h.

Referenced by bprop(), declareOptions(), and importance_sampling_gradient_update().

Dimensionality (number of components) of distributed representations. If <= 0, then distributed representations will not be used.

Definition at line 263 of file NeuralProbabilisticLanguageModel.h.

Referenced by bprop(), clearProppathGradient(), declareOptions(), fpropBeforeOutputWeights(), fpropOutput(), getNegativeEnergyValues(), importance_sampling_gradient_update(), initializeParams(), train(), update(), and verify_gradient().

Definition at line 88 of file NeuralProbabilisticLanguageModel.h.

Referenced by fpropBeforeOutputWeights().

FeatureSets to apply on input.

Definition at line 268 of file NeuralProbabilisticLanguageModel.h.

Referenced by build_(), declareOptions(), fpropBeforeOutputWeights(), initializeParams(), and makeDeepCopyFromShallowCopy().

Features seen in input since last update.

Definition at line 131 of file NeuralProbabilisticLanguageModel.h.

Referenced by bprop(), makeDeepCopyFromShallowCopy(), and update().

If true then the output weights are not learned.

They are initialized to +1 or -1 randomly (default:false)

Definition at line 235 of file NeuralProbabilisticLanguageModel.h.

Referenced by build_(), declareOptions(), and initializeParams().

Set of generated samples from the proposal distribution.

Definition at line 143 of file NeuralProbabilisticLanguageModel.h.

Referenced by importance_sampling_gradient_update(), and makeDeepCopyFromShallowCopy().

Definition at line 85 of file NeuralProbabilisticLanguageModel.h.

Referenced by gradient_transfer_func().

Temporary computation variables, used in fprop() and bprop(). Care must be taken when using these variables, since they are used by many different functions.

Definition at line 82 of file NeuralProbabilisticLanguageModel.h.

Referenced by makeDeepCopyFromShallowCopy().

Gradient through second hidden layer activation.

Definition at line 123 of file NeuralProbabilisticLanguageModel.h.

Referenced by bprop(), clearProppathGradient(), importance_sampling_gradient_update(), initializeParams(), and makeDeepCopyFromShallowCopy().

Gradient through first hidden layer activation.

Definition at line 117 of file NeuralProbabilisticLanguageModel.h.

Referenced by bprop(), clearProppathGradient(), importance_sampling_gradient_update(), initializeParams(), and makeDeepCopyFromShallowCopy().

Gradient through output layer activation.

Definition at line 127 of file NeuralProbabilisticLanguageModel.h.

Referenced by bprop(), clearProppathGradient(), initializeParams(), and makeDeepCopyFromShallowCopy().

Gradient on bias of output layer for distributed representation predictor.

Definition at line 189 of file NeuralProbabilisticLanguageModel.h.

Referenced by bprop(), importance_sampling_gradient_update(), initializeParams(), makeDeepCopyFromShallowCopy(), train(), update(), and verify_gradient().

Gradient on direct input to output bias (empty, since no bias is used)

Definition at line 177 of file NeuralProbabilisticLanguageModel.h.

Referenced by bprop(), importance_sampling_gradient_update(), initializeParams(), makeDeepCopyFromShallowCopy(), update(), and verify_gradient().

Gradient on direct input to output weights.

Definition at line 173 of file NeuralProbabilisticLanguageModel.h.

Referenced by bprop(), importance_sampling_gradient_update(), initializeParams(), makeDeepCopyFromShallowCopy(), train(), update(), and verify_gradient().

Gradient on feature input (useless for now)

Definition at line 107 of file NeuralProbabilisticLanguageModel.h.

Referenced by bprop(), importance_sampling_gradient_update(), and makeDeepCopyFromShallowCopy().

Gradient of last layer in back propagation.

Definition at line 75 of file NeuralProbabilisticLanguageModel.h.

Referenced by bprop(), importance_sampling_gradient_update(), makeDeepCopyFromShallowCopy(), and train().

Gradient on output.

Definition at line 125 of file NeuralProbabilisticLanguageModel.h.

Referenced by bprop(), clearProppathGradient(), initializeParams(), and makeDeepCopyFromShallowCopy().

Gradient on weights of output layer for distributed representation predictor.

Definition at line 183 of file NeuralProbabilisticLanguageModel.h.

Referenced by bprop(), importance_sampling_gradient_update(), initializeParams(), makeDeepCopyFromShallowCopy(), train(), update(), and verify_gradient().

Transfer function to use for hidden units (default: "tanh"): tanh, sigmoid, softplus, softmax, etc.

Definition at line 246 of file NeuralProbabilisticLanguageModel.h.

Referenced by add_transfer_func(), declareOptions(), and gradient_transfer_func().

Definition at line 87 of file NeuralProbabilisticLanguageModel.h.

Importance sampling ratios of the samples.

Definition at line 139 of file NeuralProbabilisticLanguageModel.h.

Referenced by importance_sampling_gradient_update(), makeDeepCopyFromShallowCopy(), and train().

Method of initialization for neural network's weights.

Definition at line 260 of file NeuralProbabilisticLanguageModel.h.

Referenced by declareOptions(), and fillWeights().

Last layer of network (pointer to either nnet_input, hiddenv or hidden2v)

Definition at line 73 of file NeuralProbabilisticLanguageModel.h.

Referenced by fpropBeforeOutputWeights(), fpropOutput(), getNegativeEnergyValues(), makeDeepCopyFromShallowCopy(), and verify_gradient().

Bias decay for weights from input layer to first hidden layer (default:0)

Definition at line 208 of file NeuralProbabilisticLanguageModel.h.

Referenced by bprop(), declareOptions(), and importance_sampling_gradient_update().

Weight decay for weights from input layer to first hidden layer (default:0)

Definition at line 205 of file NeuralProbabilisticLanguageModel.h.

Referenced by bprop(), declareOptions(), and importance_sampling_gradient_update().

Bias decay for weights from first hidden layer to second hidden layer (default:0)

Definition at line 214 of file NeuralProbabilisticLanguageModel.h.

Referenced by bprop(), declareOptions(), and importance_sampling_gradient_update().

Weight decay for weights from first hidden layer to second hidden layer (default:0)

Definition at line 211 of file NeuralProbabilisticLanguageModel.h.

Referenced by bprop(), declareOptions(), and importance_sampling_gradient_update().

Margin requirement, used only with the margin_perceptron_cost cost function (default:1)

Definition at line 232 of file NeuralProbabilisticLanguageModel.h.

Minimum effective sample size.

Definition at line 282 of file NeuralProbabilisticLanguageModel.h.

Referenced by declareOptions(), and train().

Definition at line 87 of file NeuralProbabilisticLanguageModel.h.

Referenced by gradient_transfer_func().

Definition at line 86 of file NeuralProbabilisticLanguageModel.h.

Referenced by fpropBeforeOutputWeights().

Vector for output computations.

Definition at line 68 of file NeuralProbabilisticLanguageModel.h.

Referenced by build_(), computeOutput(), makeDeepCopyFromShallowCopy(), and verify_gradient().

Bias decay for weights from last hidden layer to output layer (default:0)

Definition at line 220 of file NeuralProbabilisticLanguageModel.h.

Referenced by bprop(), declareOptions(), and importance_sampling_gradient_update().

Bias decay for weights from last hidden layer to output layer of distributed representation predictor (default:0)

Definition at line 229 of file NeuralProbabilisticLanguageModel.h.

Referenced by bprop(), declareOptions(), and importance_sampling_gradient_update().

Weight decay for weights from last hidden layer to output layer of distributed representation predictor (default:0)

Definition at line 226 of file NeuralProbabilisticLanguageModel.h.

Referenced by bprop(), declareOptions(), and importance_sampling_gradient_update().

Weight decay for weights from last hidden layer to output layer (default:0)

Definition at line 217 of file NeuralProbabilisticLanguageModel.h.

Referenced by bprop(), declareOptions(), and importance_sampling_gradient_update().

Transfer function to use for output layer (default: "")

Definition at line 243 of file NeuralProbabilisticLanguageModel.h.

Referenced by bprop(), declareOptions(), and fpropOutput().

Penalty to use on the weights (for weight and bias decay) (default:"L2_square")

Definition at line 241 of file NeuralProbabilisticLanguageModel.h.

Referenced by build_(), declareOptions(), gradient_affine_transform(), and gradient_penalty().

Indication that the set of possible targets vary from one input vector to another.

Definition at line 266 of file NeuralProbabilisticLanguageModel.h.

Referenced by bprop(), computeOutput(), declareOptions(), fpropBeforeOutputWeights(), fpropCostsFromOutput(), fpropOutput(), update(), and verify_gradient().

Proposal distribution for importance sampling speedup method (Bengio and Senecal 2006).

If NULL, then this speedup method won't be used. This proposal distribution should use the same symbol int/string mapping as this class uses.

Definition at line 275 of file NeuralProbabilisticLanguageModel.h.

Referenced by build_(), importance_sampling_gradient_update(), and makeDeepCopyFromShallowCopy().
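
As a broad-strokes reminder (not taken from this file): in the Bengio and Senecal scheme, the costly part of the log-likelihood gradient, an expectation of the output-activation gradients under the model's own softmax, is replaced by a Monte-Carlo estimate built from samples y_k drawn from this proposal distribution Q, roughly

    grad  ~=  grad a(target)  -  ( sum_k r_k * grad a(y_k) ) / ( sum_k r_k ),    y_k drawn from Q(. | context)

with importance ratios r_k proportional to exp(a(y_k)) / Q(y_k | context); importance_sampling_ratios, sampling_block_size and minimum_effective_sample_size above control how many such samples are drawn per update.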

Definition at line 84 of file NeuralProbabilisticLanguageModel.h.

Referenced by gradient_affine_transform().

Definition at line 84 of file NeuralProbabilisticLanguageModel.h.

Referenced by gradient_affine_transform().

Reindexed target.

Definition at line 103 of file NeuralProbabilisticLanguageModel.h.

Referenced by bprop(), clearProppathGradient(), and fpropCostsFromOutput().

Random number generator for parameters initialization.

Definition at line 129 of file NeuralProbabilisticLanguageModel.h.

Referenced by computeOutput(), fillWeights(), initializeParams(), and makeDeepCopyFromShallowCopy().

Generated sample from proposal distribution.

Definition at line 141 of file NeuralProbabilisticLanguageModel.h.

Referenced by build_(), importance_sampling_gradient_update(), and makeDeepCopyFromShallowCopy().

Size of the sampling blocks.

Definition at line 280 of file NeuralProbabilisticLanguageModel.h.

Referenced by declareOptions(), and train().

Start learning rate of gradient descent.

Definition at line 250 of file NeuralProbabilisticLanguageModel.h.

Referenced by declareOptions(), and train().

Indication that a trick to speedup stochastic gradient descent should be used.

Definition at line 258 of file NeuralProbabilisticLanguageModel.h.

Referenced by bprop(), build_(), declareOptions(), importance_sampling_gradient_update(), and train().

Definition at line 83 of file NeuralProbabilisticLanguageModel.h.

Referenced by fpropBeforeOutputWeights().

Possible target values mapping.

Definition at line 137 of file NeuralProbabilisticLanguageModel.h.

Referenced by build_(), fpropBeforeOutputWeights(), makeDeepCopyFromShallowCopy(), processDataSet(), test(), and use().

Possible target values seen since last update.

Definition at line 133 of file NeuralProbabilisticLanguageModel.h.

Referenced by bprop(), makeDeepCopyFromShallowCopy(), and update().

Number of features per input token for which a distributed representation is computed.

Definition at line 101 of file NeuralProbabilisticLanguageModel.h.

Referenced by initializeParams(), and verify_gradient().

Total output size.

Definition at line 93 of file NeuralProbabilisticLanguageModel.h.

Referenced by build_(), initializeParams(), train(), and verify_gradient().

Total number of updates so far.

Definition at line 95 of file NeuralProbabilisticLanguageModel.h.

Referenced by forget(), and train().

Indication that the proposal distribution must be trained (using train_set).

Definition at line 278 of file NeuralProbabilisticLanguageModel.h.

Referenced by build_(), and declareOptions().

VMatrix used to get values to string mapping for input tokens.

Definition at line 135 of file NeuralProbabilisticLanguageModel.h.

Referenced by batchComputeOutputAndConfidence(), build_(), fpropBeforeOutputWeights(), makeDeepCopyFromShallowCopy(), processDataSet(), test(), and use().


The documentation for this class was generated from the following files:
 NeuralProbabilisticLanguageModel.h
 NeuralProbabilisticLanguageModel.cc