PLearn::NGramDistribution Class Reference

This class implements an n-gram distribution for symbol sequence modeling.

#include <NGramDistribution.h>

Inheritance diagram for PLearn::NGramDistribution (graph not shown).
Collaboration diagram for PLearn::NGramDistribution (graph not shown).


Public Member Functions

 NGramDistribution ()
 Default constructor.
virtual void build ()
 Simply call inherited::build() then build_().
virtual void makeDeepCopyFromShallowCopy (CopiesMap &copies)
 Transform a shallow copy into a deep copy.
virtual string classname () const
virtual OptionList & getOptionList () const
virtual OptionMap & getOptionMap () const
virtual RemoteMethodMap & getRemoteMethodMap () const
virtual NGramDistribution * deepCopy (CopiesMap &copies) const
virtual real log_density (const Vec &y) const
 Return log of probability density log(p(y | x)).
virtual real survival_fn (const Vec &y) const
 Return survival function: P(Y>y | x).
virtual real cdf (const Vec &y) const
 Return cdf: P(Y<y | x).
virtual void expectation (Vec &mu) const
 Return E[Y | x].
virtual void variance (Mat &cov) const
 Return Var[Y | x].
virtual void generate (Vec &y) const
 Return a pseudo-random sample generated from the distribution.
virtual real density (const Vec &y) const
 Return probability density p(y | x)
virtual void forget ()
 (Re-)initializes the PDistribution in its fresh state (that state may depend on the 'seed' option).
virtual void train ()
 Fills the NGramTree with the ngrams from the training set and estimates the smoothing parameters (e.g. the Jelinek-Mercer lambdas).

Static Public Member Functions

static string _classname_ ()
static OptionList & _getOptionList_ ()
static RemoteMethodMap & _getRemoteMethodMap_ ()
static Object * _new_instance_for_typemap_ ()
static bool _isa_ (const Object *o)
static void _static_initialize_ ()
static const PPath & declaringFile ()

Public Attributes

bool nan_replace
 Replace nan values with -1.
int n
 N in NGram.
real additive_constant
 Additive constant for Add-delta smoothing.
real discount_constant
 Discount constant for absolute discounting smoothing.
string smoothing
 Smoothing parameter.
string lambda_estimation
 Lambda estimation technique.
Vec lambdas
 Lambdas for Jelinek-Mercer smoothing.
PP< NGramTree > tree
 NGram tree.

Static Public Attributes

static StaticInitializer _static_initializer_

Static Protected Member Functions

static void declareOptions (OptionList &ol)
 Declare this class' options.

Protected Attributes

int voc_size

Private Types

typedef PDistribution inherited

Private Member Functions

void build_ ()
 This does the actual building.
void getNGrams (Vec row, TVec< int > &ngram) const
 Takes a row of a VMat and returns the associated ngram.

Detailed Description

This class implements an ngram distribution for symbol sequence modeling.

Definition at line 58 of file NGramDistribution.h.
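
For orientation, a minimal usage sketch follows. It relies on the inherited PLearner/PDistribution API (setTrainingSet(), setPredictor()); the header path, the VMat name 'symbols' and the symbol indices are hypothetical illustrations, not taken from this page.

// Minimal usage sketch (hypothetical data; header path assumed).
#include <plearn_learners/distributions/NGramDistribution.h>

using namespace PLearn;

void exampleUsage(VMat symbols) // each row holds n consecutive symbol indices
{
    PP<NGramDistribution> dist = new NGramDistribution();
    dist->n = 3;                      // trigram model
    dist->smoothing = "add-delta";
    dist->additive_constant = 0.5;
    dist->setTrainingSet(symbols);    // inherited from PLearner
    dist->build();
    dist->train();                    // fills the NGramTree with ngram counts

    // Query p(y | x): set the (n-1)-symbol context, then evaluate the density.
    Vec context(2), y(1);
    context[0] = 4; context[1] = 7;   // hypothetical symbol indices
    y[0] = 2;
    dist->setPredictor(context);
    real p  = dist->density(y);       // p(y[0] | context)
    real lp = dist->log_density(y);   // safeflog of the above
}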


Member Typedef Documentation

typedef PDistribution PLearn::NGramDistribution::inherited [private]

Reimplemented from PLearn::PDistribution.

Definition at line 63 of file NGramDistribution.h.


Constructor & Destructor Documentation

PLearn::NGramDistribution::NGramDistribution ( )

Default constructor.

Definition at line 54 of file NGramDistribution.cc.

References forget(), PLearn::PDistribution::predicted_size, and PLearn::PDistribution::predictor_size.

NGramDistribution::NGramDistribution():
    nan_replace(false),
    n(2),
    additive_constant(0),
    discount_constant(0.01), 
    smoothing("no_smoothing"),
    lambda_estimation("manual")
{
    forget();
    // In a N-Gram, the predicted size is always one.
    predicted_size = 1;
    predictor_size = -1;
}



Member Function Documentation

string PLearn::NGramDistribution::_classname_ ( ) [static]

Reimplemented from PLearn::PDistribution.

Definition at line 73 of file NGramDistribution.cc.

OptionList & PLearn::NGramDistribution::_getOptionList_ ( ) [static]

Reimplemented from PLearn::PDistribution.

Definition at line 73 of file NGramDistribution.cc.

RemoteMethodMap & PLearn::NGramDistribution::_getRemoteMethodMap_ ( ) [static]

Reimplemented from PLearn::PDistribution.

Definition at line 73 of file NGramDistribution.cc.

bool PLearn::NGramDistribution::_isa_ ( const Object * o) [static]

Reimplemented from PLearn::PDistribution.

Definition at line 73 of file NGramDistribution.cc.

Object * PLearn::NGramDistribution::_new_instance_for_typemap_ ( ) [static]

Reimplemented from PLearn::PDistribution.

Definition at line 73 of file NGramDistribution.cc.

void PLearn::NGramDistribution::_static_initialize_ ( ) [static]

Reimplemented from PLearn::PDistribution.

Definition at line 73 of file NGramDistribution.cc.

void PLearn::NGramDistribution::build ( ) [virtual]

Simply call inherited::build() then build_().

Reimplemented from PLearn::PDistribution.

Definition at line 154 of file NGramDistribution.cc.

References PLearn::PDistribution::build(), build_(), n, and PLearn::PDistribution::predictor_size.

{
    // now set in the constructor to -1
    predictor_size = n - 1;
    inherited::build();
    build_();
}


void PLearn::NGramDistribution::build_ ( ) [private]

This does the actual building.

Reimplemented from PLearn::PDistribution.

Definition at line 165 of file NGramDistribution.cc.

References PLearn::PLearner::inputsize(), PLearn::TVec< T >::length(), n, nan_replace, PLERROR, smoothing, PLearn::PLearner::train_set, and voc_size.

Referenced by build().

{
    if(train_set)
    {
        if(inputsize() != n) PLERROR("In NGramDistribution:build_() : input size should be n=%d", n);
        Vec values;
        train_set->getValues(0,n-1,values);
        voc_size = values.length();
        if(voc_size <= 0) PLERROR("In NGramDistribution:build_() : vocabulary size is <= 0");

        if(nan_replace) voc_size++;

        if(smoothing == "absolute-discounting")
        {
            if(discount_constant < 0 || discount_constant > 1)
                PLERROR("In NGramDistribution:build_() : discount constant should be in [0,1]");
        }
    }
}


real PLearn::NGramDistribution::cdf ( const Vec & y ) const [virtual]

Return cdf: P(Y<y | x).

Reimplemented from PLearn::PDistribution.

Definition at line 188 of file NGramDistribution.cc.

References PLERROR.

{
    PLERROR("cdf not implemented for NGramDistribution"); return 0;
}
string PLearn::NGramDistribution::classname ( ) const [virtual]

Reimplemented from PLearn::PDistribution.

Definition at line 73 of file NGramDistribution.cc.

void PLearn::NGramDistribution::declareOptions ( OptionList & ol ) [static, protected]

Declare this class' options.

Reimplemented from PLearn::PDistribution.

Definition at line 78 of file NGramDistribution.cc.

References additive_constant, PLearn::OptionBase::buildoption, PLearn::declareOption(), PLearn::PDistribution::declareOptions(), discount_constant, lambda_estimation, lambdas, PLearn::OptionBase::learntoption, n, nan_replace, PLearn::OptionBase::nosave, PLearn::PDistribution::predicted_size, PLearn::PDistribution::predictor_size, PLearn::redeclareOption(), smoothing, tree, PLearn::PLearner::validation_set, and voc_size.

{
    // ### Declare all of this object's options here
    // ### For the "flags" of each option, you should typically specify
    // ### one of OptionBase::buildoption, OptionBase::learntoption or
    // ### OptionBase::tuningoption. Another possible flag to be combined with
    // ### is OptionBase::nosave

    declareOption(ol, "nan_replace", &NGramDistribution::nan_replace, 
                  OptionBase::buildoption,
                  "Indication that the missing values in context (nan) should be\n"
                  "replaced by a default value (-1). nan fields should correspond\n"
                  "to context not accessible (like in the beginning of a sentence).\n"
                  "If this parameter is false, than the shortest ngram is inserted\n"
                  "in the NGramTree."
        );

    declareOption(ol, "n", &NGramDistribution::n, OptionBase::buildoption,
        "Length of the n-gram (this option overrides the inherited options\n"
        "'predictor_size' and 'predicted_size', i.e. predictor_size = n-1\n"
        "and predicted_size = 1.");

    declareOption(ol, "additive_constant", &NGramDistribution::additive_constant, 
                  OptionBase::buildoption,
                  "Additive constant for add-delta smoothing");

    declareOption(ol, "discount_constant", &NGramDistribution::discount_constant, 
                  OptionBase::buildoption,
                  "Discount constant for absolut discounting smoothing");

    declareOption(ol, "smoothing", &NGramDistribution::smoothing, 
                  OptionBase::buildoption,
                  "Smoothing method. Choose among:\n"
                  "- \"no_smoothing\"\n"
                  "- \"add-delta\"\n"
                  "- \"jelinek-mercer\"\n"
                  "- \"witten-bell\"\n"
                  "- \"absolute-discounting\"\n"
        );
    declareOption(ol, "lambda_estimation", &NGramDistribution::lambda_estimation, 
                  OptionBase::buildoption,
                  "Lambdas estimation method. Choose among:\n"
                  "- \"manual\" (lambdas field should be specified)\n"
                  "- \"EM\"\n"
        );
    declareOption(ol, "lambdas", &NGramDistribution::lambdas, 
                  OptionBase::buildoption,
                  "Lambdas of the interpolated ngram");

    declareOption(ol, "validation_set", &NGramDistribution::validation_set, 
                  OptionBase::buildoption,
                  "Validation set used to estimate the lambdas with the\n"
                  "EM algorithm.");

    declareOption(ol, "tree", &NGramDistribution::tree, OptionBase::learntoption,
                  "NGramTree of the frequencies");

    declareOption(ol, "voc_size", &NGramDistribution::voc_size, 
                  OptionBase::learntoption,
                  "Vocabulary size");

    // Now call the parent class' declareOptions().
    inherited::declareOptions(ol);

    redeclareOption(ol, "predictor_size",  &NGramDistribution::predictor_size,
                  OptionBase::nosave,
                  "Defined at build time.");

    redeclareOption(ol, "predicted_size",  &NGramDistribution::predicted_size,
                  OptionBase::nosave,
                  "Defined at build time.");
}
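
Build options can equivalently be set through the generic string-based option mechanism of PLearn::Object; a small sketch (assuming Object::setOption(name, value), as used elsewhere in PLearn):

    PP<NGramDistribution> dist = new NGramDistribution();
    dist->setOption("n", "3");                   // predictor_size becomes n-1 at build time
    dist->setOption("smoothing", "witten-bell");
    dist->setOption("nan_replace", "1");         // encode missing context symbols as -1
    dist->build();                               // validates the smoothing settings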


static const PPath& PLearn::NGramDistribution::declaringFile ( ) [inline, static]

Reimplemented from PLearn::PDistribution.

Definition at line 141 of file NGramDistribution.h.

NGramDistribution * PLearn::NGramDistribution::deepCopy ( CopiesMap & copies ) const [virtual]

Reimplemented from PLearn::PDistribution.

Definition at line 73 of file NGramDistribution.cc.

real PLearn::NGramDistribution::density ( const Vec & y ) const [virtual]

Return probability density p(y | x)

Reimplemented from PLearn::PDistribution.

Definition at line 227 of file NGramDistribution.cc.

References additive_constant, discount_constant, getNGrams(), i, PLearn::is_missing(), j, lambdas, PLearn::TVec< T >::length(), n, PLearn::norm(), PLERROR, PLearn::PDistribution::predictor_part, smoothing, tree, and voc_size.

Referenced by log_density().

{
    if(is_missing(y[0])) PLERROR("In NGramDistribution:density() : y[0] is missing");

    // Making ngram

    static TVec<int> ngram;

    Vec row(n);
    row[n-1] = y[0];
    for(int i=0; i<n-1; i++)
        row[i] = predictor_part[i];

    getNGrams(row,ngram);

    // Computing P(w_i|w_{i-n+1}^{i-1})

    TVec<int> freq;
    TVec<int> normalization;
    int ngram_length = ngram.length();

    if(smoothing == "no_smoothing")
    {
        freq = tree->freq(ngram);
        normalization = tree->normalization(ngram);
        if(normalization[ngram_length-1] == 0)
            return 1.0/voc_size;
        return ((real)freq[ngram_length-1])/normalization[ngram_length-1];
    }
    else if(smoothing == "add-delta")
    {
        freq = tree->freq(ngram);
        normalization = tree->normalization(ngram);
        return ((real)freq[ngram_length-1] + additive_constant)/(normalization[ngram_length-1] + additive_constant*voc_size);
    }
    else if(smoothing == "jelinek-mercer")
    {
        freq = tree->freq(ngram);
        normalization = tree->normalization(ngram);
        real ret = 1.0/voc_size*lambdas[0];
        real norm = lambdas[0]; // For ngram smaller than n...

        for(int j=0; j<ngram_length;j++)
        {
            if(normalization[j] != 0)
            {
                ret += lambdas[j+1] * (((real)freq[j])/normalization[j]);
                norm += lambdas[j+1];
            }
        }
        return ret/norm;
    }
    else if(smoothing == "absolute-discounting")
    {
        freq = tree->freq(ngram);
        normalization = tree->normalization(ngram);
        TVec<int> n_freq = tree->n_freq(ngram);
        real ret = 0;
        real factor = 1;
        for(int j=ngram_length-1; j>=0; j--)
        {
            if(normalization[j] != 0)
            {
                ret += factor * ((real)(freq[j] > discount_constant ? freq[j] - discount_constant : 0))/ normalization[j];
                factor = factor * ((real)discount_constant)/normalization[j] * n_freq[j];
            }
        }
        ret += factor *1.0/voc_size;

        return ret;
    }
    else if(smoothing == "witten-bell")
    {
        freq = tree->freq(ngram);
        normalization = tree->normalization(ngram);
        TVec<int> n_freq = tree->n_freq(ngram);
        real ret = 1.0/voc_size;
        for(int j=0; j<ngram_length; j++)
        {
            if(normalization[j] != 0)
                ret = (freq[j]+n_freq[j]*ret)/(normalization[j]+n_freq[j]);
        }

        return ret;
    }
    else PLERROR("In NGramDistribution:density() : smoothing technique not valid");
    return 0;
}
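
In formulas (a reading of the branches above, using the page's own quantities): write c_j and N_j for freq[j] and normalization[j] at context depth j, n_j for n_freq[j], |V| for voc_size, delta for additive_constant and D for discount_constant. The returned probability is then:

\[
\begin{aligned}
\text{no\_smoothing:}\quad & P = \frac{c_{n-1}}{N_{n-1}}, \qquad P = \frac{1}{|V|}\ \text{if}\ N_{n-1}=0\\
\text{add-delta:}\quad & P = \frac{c_{n-1}+\delta}{N_{n-1}+\delta\,|V|}\\
\text{jelinek-mercer:}\quad & P = \frac{\lambda_0/|V| \;+\; \sum_{j:\,N_j\neq 0}\lambda_{j+1}\,c_j/N_j}{\lambda_0 \;+\; \sum_{j:\,N_j\neq 0}\lambda_{j+1}}\\
\text{absolute-discounting:}\quad & P_j = \frac{\max(c_j-D,\,0)}{N_j} \;+\; \frac{D\,n_j}{N_j}\,P_{j-1}, \qquad P_{-1}=\frac{1}{|V|}\\
\text{witten-bell:}\quad & P_j = \frac{c_j + n_j\,P_{j-1}}{N_j + n_j}, \qquad P_{-1}=\frac{1}{|V|}
\end{aligned}
\]

For the two recursive schemes, depths with N_j = 0 are skipped (P_j = P_{j-1}) and the value at the deepest available context is returned.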


void PLearn::NGramDistribution::expectation ( Vec & mu ) const [virtual]

Return E[Y | x].

Reimplemented from PLearn::PDistribution.

Definition at line 196 of file NGramDistribution.cc.

References PLERROR.

{
    PLERROR("expectation not implemented for NGramDistribution");
}
void PLearn::NGramDistribution::forget ( ) [virtual]

(Re-)initializes the PDistribution in its fresh state (that state may depend on the 'seed' option).

And sets 'stage' back to 0 (this is the stage of a fresh learner!).

Reimplemented from PLearn::PDistribution.

Definition at line 205 of file NGramDistribution.cc.

References tree.

Referenced by NGramDistribution().

{
    tree = new NGramTree();
}


void PLearn::NGramDistribution::generate ( Vec & y ) const [virtual]

Return a pseudo-random sample generated from the distribution.

Reimplemented from PLearn::PDistribution.

Definition at line 213 of file NGramDistribution.cc.

References PLERROR.

{

    PLERROR("generate not implemented for NGramDistribution");
}
void PLearn::NGramDistribution::getNGrams ( Vec row, TVec< int > & ngram ) const [private]

Takes a row of a VMat and returns the associated ngram.

Definition at line 346 of file NGramDistribution.cc.

References PLearn::is_missing(), j, PLearn::TVec< T >::length(), n, nan_replace, PLERROR, and PLearn::TVec< T >::resize().

Referenced by density(), and train().

{
    if(is_missing(row[row.length()-1])) PLERROR("In getNGrams() : last element of row is NaN");

    int insert_from = 0;
    //Looking for nan
    if(!nan_replace)
        for(int j=0; j<row.length(); j++)
            if(is_missing(row[j]))
                insert_from = j+1;

    ngram.resize(n-insert_from);

    //Making ngram
    for(int j=insert_from; j<row.length(); j++)
    {
        if(is_missing(row[j]))
            ngram[j-insert_from] = -1;
        else
            ngram[j-insert_from] = (int)row[j];
    }
}
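
A behavior sketch (hypothetical symbol values; getNGrams() is private and is invoked internally by density() and train()):

    // With n = 3 and a row whose first context symbol is missing:
    //   nan_replace == false : row (nan, 5, 2) -> ngram (5, 2)
    //     insertion starts after the last missing value, so only the
    //     shortest complete ngram (here a bigram) is kept
    //   nan_replace == true  : row (nan, 5, 2) -> ngram (-1, 5, 2)
    //     missing context symbols are encoded as -1
    Vec row(3);
    row[0] = MISSING_VALUE; // PLearn's missing-value marker
    row[1] = 5;
    row[2] = 2;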


OptionList & PLearn::NGramDistribution::getOptionList ( ) const [virtual]

Reimplemented from PLearn::PDistribution.

Definition at line 73 of file NGramDistribution.cc.

OptionMap & PLearn::NGramDistribution::getOptionMap ( ) const [virtual]

Reimplemented from PLearn::PDistribution.

Definition at line 73 of file NGramDistribution.cc.

RemoteMethodMap & PLearn::NGramDistribution::getRemoteMethodMap ( ) const [virtual]

Reimplemented from PLearn::PDistribution.

Definition at line 73 of file NGramDistribution.cc.

real PLearn::NGramDistribution::log_density ( const Vec & y ) const [virtual]

Return log of probability density log(p(y | x)).

Reimplemented from PLearn::PDistribution.

Definition at line 222 of file NGramDistribution.cc.

References density(), and PLearn::safeflog().

{
    return safeflog(density(y));
}


void PLearn::NGramDistribution::makeDeepCopyFromShallowCopy ( CopiesMap & copies ) [virtual]

Transform a shallow copy into a deep copy.

Reimplemented from PLearn::PDistribution.

Definition at line 319 of file NGramDistribution.cc.

References PLearn::deepCopyField(), lambdas, PLearn::PDistribution::makeDeepCopyFromShallowCopy(), and tree.

{
    inherited::makeDeepCopyFromShallowCopy(copies);

    deepCopyField(lambdas, copies);
    deepCopyField(tree, copies);

    // ### Remove this line when you have fully implemented this method.
    //PLERROR("NGramDistribution::makeDeepCopyFromShallowCopy not fully (correctly) implemented yet!");
}


real PLearn::NGramDistribution::survival_fn ( const Vec & y ) const [virtual]

Return survival function: P(Y>y | x).

Reimplemented from PLearn::PDistribution.

Definition at line 333 of file NGramDistribution.cc.

References PLERROR.

{
    PLERROR("survival_fn not implemented for NGramDistribution"); return 0;
}
void PLearn::NGramDistribution::train ( ) [virtual]

Fills the NGramTree with the ngrams from the training set and estimates the smoothing parameters (e.g. the Jelinek-Mercer lambdas via EM).

Reimplemented from PLearn::PDistribution.

Definition at line 369 of file NGramDistribution.cc.

References PLearn::abs(), PLearn::diff(), EM_PRECISION, PLearn::endl(), PLearn::TVec< T >::fill(), getNGrams(), i, j, lambda_estimation, lambdas, PLearn::TVec< T >::length(), PLearn::VMat::length(), n, PLearn::PLearner::nstages, PLERROR, PLearn::TVec< T >::resize(), PLearn::safeflog(), smoothing, PLearn::PLearner::stage, PLearn::sum(), THIS_PRECISION, PLearn::PLearner::train_set, tree, PLearn::PLearner::validation_set, PLearn::PLearner::verbosity, and voc_size.

{

//    if(smoothing == "jelinek-mercer" && lambda_estimation == "EM")
//    {
//        if(validation_proportion <= 0 || validation_proportion >= 1)
//            PLERROR("In NGramDistribution:build_() : validation_proportion should be in (0,1)");
//        // Making FractionSplitter
//        PP<FractionSplitter> fsplit = new FractionSplitter();
//        TMat<pair<real,real> > splits(1,2);
//        splits(0,0).first = 0; splits(0,0).second = 1-validation_proportion;
//        splits(0,1).first = 1-validation_proportion; splits(0,1).second = 1;
//        fsplit->splits = splits;
//        fsplit->build();
//
//        // Making RepeatSplitter
//        PP<RepeatSplitter> rsplit = new RepeatSplitter();
//        rsplit->n = 1;
//        rsplit->shuffle = true;
//        rsplit->seed = 123456;
//        rsplit->to_repeat = fsplit;
//        rsplit->setDataSet(train_set);
//        rsplit->build();
//
//        TVec<VMat> vmat_splits = rsplit->getSplit();
//        contexts_train = vmat_splits[0];
//        contexts_validation = vmat_splits[1];
//    }
//    else


    //Putting ngrams in the tree
    Vec row(n);
    TVec<int> int_row(n);

    if(stage == 0 && nstages>0)
    {
        PP<ProgressBar> pb =  new ProgressBar("Inserting ngrams in NGramTree", train_set->length());
        for(int i=0; i<train_set->length(); i++)
        {
            train_set->getRow(i,row);
            getNGrams(row,int_row);
            tree->add(int_row);
            
            pb->update(i+1);
        }
        stage++;
        if(smoothing == "jelinek-mercer" && lambda_estimation == "EM")
            stage--; //Will be incremented in EM estimation
    }

    // Smoothing techniques parameter estimation
    if(smoothing == "jelinek-mercer")
    {
        //Jelinek-Mercer: EM estimation of lambdas
        if(lambda_estimation == "EM")
        {
            if(stage == 0) 
            {
                lambdas.resize(n+1); lambdas.fill(1.0/(n+1));
            }
            if(!validation_set) PLERROR("In NGramDistribution:build_() : "
                                        "validation_set needs to be provided");
            real diff = EM_PRECISION+1;
            real l_old = 0, l_new = -REAL_MAX;
            Vec e(n+1);
            Vec p(n+1);
            TVec<int> ngram(n);
            real p_sum = 0;
            int n_ngram = 0;
            //while(diff > EM_PRECISION)
            while(stage < nstages)
            {
                if(verbosity > 2)
                    cout << "EM diff: " << diff << endl;
                n_ngram = 0;
                l_old = l_new; l_new = 0;

                // E step

                e.fill(0);
                //for(int t=0; t<contexts_validation->length(); t++)
                for(int t=0; t<validation_set->length(); t++)
                {
                    p_sum = 0;

                    // get w_{t-n+1}^t

                    //contexts_validation->getRow(t,row);
                    validation_set->getRow(t,row);
                    getNGrams(row,ngram);

                    TVec<int> freq = tree->freq(ngram);
                    TVec<int> normalization = tree->normalization(ngram);
                    if(normalization[ngram.length()-1] != 0)
                    {
                        n_ngram++;
                        p.fill(0);
                        p[0] = lambdas[0]*1.0/voc_size;
                        p_sum += p[0];
                        for(int j=0; j<ngram.length(); j++)
                        {
                            p[j+1] = lambdas[j+1]*(((real)freq[j])/normalization[j]);
                            p_sum += p[j+1];
                        }

                        for(int j=0; j<e.length(); j++)
                            e[j] += p[j]/p_sum;
                        l_new += safeflog(p_sum);
                    }
                }
                if(n_ngram == 0) PLERROR("In NGramDistribution:train() : no ngram in validation set");
                // M step
                for(int j=0; j<lambdas.length(); j++)
                    lambdas[j] = e[j]/n_ngram;

                diff = l_new-l_old;
                stage++;
            }

            //Test

            real temp = 0;
            for(int j=0; j<lambdas.length(); j++)
                temp += lambdas[j];
            if(abs(temp-1) > THIS_PRECISION)
                PLERROR("oops, lambdas don't sum to one after EM!!");
        }
        else if(lambda_estimation == "manual")
        {
            if(lambdas.length() != n+1) PLERROR("In NGramDistribution:build_() : lambdas' length should be %d, not %d", n+1, lambdas.length());
            real sum = 0;
            for(int j=0; j<lambdas.length(); j++)
            {
                if(lambdas[j]<0) PLERROR("In NGramDistribution:build_() : all lambdas should be non negative");
                sum += lambdas[j];
            }
            if(abs(sum) < THIS_PRECISION)
                lambdas.fill(1.0/(n+1));
            else
                lambdas *= 1.0/sum;
        }
        else PLERROR("In NGramDistribution:build_() : lambda estimation not valid");

    }

}
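
The EM loop fits the interpolation weights on the validation set; in formulas (a reading of the code above), for each validation ngram t whose full-context normalization count is nonzero:

\[
\begin{aligned}
p_0^{(t)} &= \frac{\lambda_0}{|V|}, \qquad p_{j+1}^{(t)} = \lambda_{j+1}\,\frac{c_j^{(t)}}{N_j^{(t)}} \quad\text{(mixture components)}\\
e_j &\leftarrow e_j + \frac{p_j^{(t)}}{\sum_k p_k^{(t)}} \quad\text{(E step, accumulated over the valid ngrams)}\\
\lambda_j &\leftarrow \frac{e_j}{M} \quad\text{(M step, with } M \text{ the number of ngrams used)}\\
\ell &= \sum_t \log\Big(\sum_k p_k^{(t)}\Big) \quad\text{(log-likelihood whose increase is the monitored EM diff)}
\end{aligned}
\]

The final check then verifies that the lambdas sum to one, which the M step guarantees up to numerical precision.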


void PLearn::NGramDistribution::variance ( Mat & cov ) const [virtual]

Return Var[Y | x].

Reimplemented from PLearn::PDistribution.

Definition at line 341 of file NGramDistribution.cc.

References PLERROR.

{
    PLERROR("variance not implemented for NGramDistribution");
}

Member Data Documentation

StaticInitializer PLearn::NGramDistribution::_static_initializer_ [static]

Reimplemented from PLearn::PDistribution.

Definition at line 141 of file NGramDistribution.h.

real PLearn::NGramDistribution::additive_constant

Additive constant for add-delta smoothing.

Definition at line 86 of file NGramDistribution.h.

Referenced by declareOptions(), and density().

real PLearn::NGramDistribution::discount_constant

Discount constant for absolute discounting smoothing.

Definition at line 89 of file NGramDistribution.h.

Referenced by declareOptions(), and density().

string PLearn::NGramDistribution::lambda_estimation

Lambda estimation technique.

Definition at line 95 of file NGramDistribution.h.

Referenced by declareOptions(), and train().

Vec PLearn::NGramDistribution::lambdas

Lambdas for Jelinek-Mercer smoothing.

Definition at line 98 of file NGramDistribution.h.

Referenced by declareOptions(), density(), makeDeepCopyFromShallowCopy(), and train().

int PLearn::NGramDistribution::n

N in NGram.

Definition at line 83 of file NGramDistribution.h.

Referenced by build(), build_(), declareOptions(), density(), getNGrams(), and train().

bool PLearn::NGramDistribution::nan_replace

Replace nan values with -1.

Definition at line 80 of file NGramDistribution.h.

Referenced by build_(), declareOptions(), and getNGrams().

string PLearn::NGramDistribution::smoothing

Smoothing parameter.

Definition at line 92 of file NGramDistribution.h.

Referenced by build_(), declareOptions(), density(), and train().

PP< NGramTree > PLearn::NGramDistribution::tree

NGram tree.

Definition at line 101 of file NGramDistribution.h.

Referenced by declareOptions(), density(), forget(), makeDeepCopyFromShallowCopy(), and train().

int PLearn::NGramDistribution::voc_size [protected]

Definition at line 71 of file NGramDistribution.h.

Referenced by build_(), declareOptions(), density(), and train().


The documentation for this class was generated from the following files:
 NGramDistribution.h
 NGramDistribution.cc