PLearn 0.1
PLearn::Learner Class Reference

#include <Learner.h>

Inheritance diagram for PLearn::Learner: (diagram not included)
Collaboration diagram for PLearn::Learner: (diagram not included)


Public Types

typedef Object inherited

Public Member Functions

string basename () const
 returns expdir+train_set->getAlias() (if train_set is indeed defined and has an alias...)
 Learner (int the_inputsize=0, int the_targetsize=0, int the_outputsize=0)
virtual ~Learner ()
virtual void setExperimentDirectory (const PPath &the_expdir)
 The experiment directory is the directory in which files related to this model are to be saved.
string getExperimentDirectory () const
virtual Learner * deepCopy (CopiesMap &copies) const
virtual void makeDeepCopyFromShallowCopy (CopiesMap &copies)
 Does the necessary operations to transform a shallow copy (this) into a deep copy by deep-copying all the members that need to be.
virtual void build ()
 **** SUBCLASS WRITING: **** This method should be redefined in subclasses, to just call inherited::build() and then build_()
virtual void setTrainingSet (VMat training_set)
 Declare the train_set.
VMat getTrainingSet ()
virtual void train (VMat training_set)=0
virtual void newtrain (VecStatsCollector &train_stats)
virtual void newtest (VMat testset, VecStatsCollector &test_stats, VMat testoutputs=0, VMat testcosts=0)
 Should perform test on testset, updating test cost statistics, and optionally filling testoutputs and testcosts.
virtual void train (VMat training_set, VMat accept_prob, real max_accept_prob=1.0, VMat weights=VMat())
virtual void use (const Vec &input, Vec &output)=0
virtual void use (const Mat &inputs, Mat outputs)
virtual void computeOutput (const VVec &input, Vec &output)
 *** SUBCLASS WRITING: *** This should be overloaded in subclasses to compute the output from the input
virtual void computeCostsFromOutputs (const VVec &input, const Vec &output, const VVec &target, const VVec &weight, Vec &costs)
 *** SUBCLASS WRITING: *** This should be overloaded in subclasses to compute the weighted costs from already computed output.
virtual void computeOutputAndCosts (const VVec &input, VVec &target, const VVec &weight, Vec &output, Vec &costs)
 Default calls computeOutput and computeCostsFromOutputs. You may overload this if you have a more efficient way to compute both output and weighted costs at the same time.
virtual void computeCosts (const VVec &input, VVec &target, VVec &weight, Vec &costs)
 Default calls computeOutputAndCosts. This may be overloaded if there is a more efficient way to compute the costs directly, without computing the whole output vector.
virtual void setModel (const Vec &new_options)
virtual void forget ()
virtual bool measure (int step, const Vec &costs)
virtual void oldwrite (ostream &out) const
void save (const PPath &filename="") const
 DEPRECATED. Call PLearn::save(filename, object) instead.
void load (const PPath &filename="")
 DEPRECATED. Call PLearn::load(filename, object) instead.
virtual void stop_if_wanted ()
 stopping condition, by default when a file named experiment_name + "_stop" is found to exist.
int inputsize () const
 Simple accessor methods: (do NOT overload! Set inputsize_ and outputsize_ instead)
int targetsize () const
int outputsize () const
int weightsize () const
int epoch () const
virtual int costsize () const
 **** SUBCLASS WRITING: should be re-defined if user re-defines computeCost; the default version returns test_costfuncs.size()
void setTestCostFunctions (Array< CostFunc > costfunctions)
 Call this method to define what cost functions are computed by default (these are generic cost functions which compare the output with the target)
void setTestStatistics (StatsItArray statistics)
 This method defines what statistics are computed on the costs (which compute a vector of statistics that depend on all the test costs)
virtual void setTestDuringTrain (ostream &testout, int every, Array< VMat > testsets)
 testout: the stream where the test results are to be written; every: how often (in number of iterations) the tests should be performed
virtual void setTestDuringTrain (Array< VMat > testsets)
const Array< VMat > & getTestDuringTrain () const
 return the test sets that are used during training
void setEarlyStopping (int which_testset, int which_testresult, real max_degradation, real min_value=-FLT_MAX, real min_improvement=0, bool relative_changes=true, bool save_best=true, int max_degraded_steps=-1)
virtual void computeCost (const Vec &input, const Vec &target, const Vec &output, const Vec &cost)
 computes the cost vec, given input, target and output. The default version applies the declared CostFunc's on the (output,target) pair, putting the cost computed for each CostFunc in an element of the cost vector.
virtual void useAndCost (const Vec &input, const Vec &target, Vec output, Vec cost)
 By default this function calls use(input, output) and then computeCost(input, target, output, cost), so you can overload computeCost to change cost computation.
virtual void useAndCostOnTestVec (const VMat &test_set, int i, const Vec &output, const Vec &cost)
 Default version calls useAndCost on test_set(i), so you don't need to overload this method unless you want to provide a more efficient implementation.
virtual void apply (const VMat &data, VMat outputs)
virtual void applyAndComputeCosts (const VMat &data, VMat outputs, VMat costs)
virtual void applyAndComputeCostsOnTestMat (const VMat &test_set, int i, const Mat &output_block, const Mat &cost_block)
 Like useAndCostOnTestVec, but on a block (of length minibatch_size) of rows from the test set: apply learner and compute outputs and costs for the block of test_set rows starting at i.
virtual void computeCosts (const VMat &data, VMat costs)
virtual void computeLeaveOneOutCosts (const VMat &data, VMat costs)
 For each data point i, trains with dataset removeRow(data,i) and calls useAndCost on point i, puts results in costs vmat.
virtual void computeLeaveOneOutCosts (const VMat &data, VMat costsmat, CostFunc costf)
Vec computeTestStatistics (const VMat &costs)
virtual Vec test (VMat test_set, const string &save_test_outputs="", const string &save_test_costs="")
 This function should work with and without MPI.
virtual Array< string > costNames () const
virtual Array< string > testResultsNames () const
virtual Array< string > trainObjectiveNames () const
 returns an array of strings corresponding to the names of the fields that will be written to objectiveout (by default this calls testResultsNames() )
void appendMeasurer (Measurer &measurer)
Vec getTrainCost ()

Static Public Member Functions

static PStream & default_vlog ()
 The default stream to which vlog is set upon construction of all Learners (defaults to cout)
static string _classname_ ()
 Returns the name of this class as a string.
static OptionList & _getOptionList_ ()
static RemoteMethodMap & _getRemoteMethodMap_ ()
static bool _isa_ (const Object *o)
static void _static_initialize_ ()
static const PPath & declaringFile ()

Public Attributes

int inputsize_
 The data VMats are assumed to be formed of inputsize() columns followed by targetsize() columns.
int targetsize_
int outputsize_
 the use() method produces an output vector of size outputsize().
int weightsize_
 number of weight fields in the target vec (all_targets = actual_target & weights)
bool dont_parallelize
 By default, MPI parallelization done at a given level prevents further parallelization at lower levels.
PStream testout
 test during train specifications
int test_every
Vec avg_objective
 average of the objective function(s) over the last test_every steps
Vec avgsq_objective
 average of the squared objective function(s) over the last test_every steps
VMat train_set
 the current set being used for training
Array< VMat > test_sets
 test sets to test on during train
int minibatch_size
 test by blocks of this size using apply rather than use
int report_test_progress_every
Vec options
 DEPRECATED options in the construction of the model through setModel.
int earlystop_testsetnum
 early-stopping parameters
int earlystop_testresultindex
 index of statistic (as returned by test) to use
real earlystop_max_degradation
 maximum degradation in error from last best value
real earlystop_min_value
 minimum error beyond which we stop
real earlystop_min_improvement
 minimum improvement in error otherwise we stop
bool earlystop_relative_changes
 are max_degradation and min_improvement relative?
bool earlystop_save_best
 if yes, then return with saved "best" model
int earlystop_max_degraded_steps
 max. nb of steps beyond best found [in version >= 1]
bool save_at_every_epoch
 save learner at each epoch?
bool save_objective
 whether to save in basename()+".objective" the cost after each measure (e.g. after each epoch)
int best_step
 the step (usually epoch) at which validation cost was best
real earlystop_minval
string experiment_name
Array< CostFunc > test_costfuncs
StatsItArray test_statistics
PStream vlog
 The log stream to which all the verbose output from this learner should be sent.
PStream objectiveout
 The log stream to use to record the objective function during training.
Vec vec_input
 **Next generation** learners allow inputs to be anything, not just Vec

Static Public Attributes

static int use_file_if_bigger = 64000000L
 number of elements above which a file VMatrix rather than an in-memory one should be used (when computing statistics requiring multiple passes over a test set)
static bool force_saving_on_all_processes = false
 if true, save() executes on all MPI processes rather than only on the rank-0 process
static StaticInitializer _static_initializer_

Protected Member Functions

void openTrainObjectiveStream ()
 opens the train.objective file for appending in the expdir
ostream & getTrainObjectiveStream ()
 returns the stream for writing the train objective (and other costs). The stream is opened by calling openTrainObjectiveStream if it wasn't already.
void openTestResultsStreams ()
 opens the files in append mode for writing the test results
ostream & getTestResultsStream (int k)
 Returns the stream corresponding to testset #k (as specified by setTestDuringTrain). The stream is opened by calling openTestResultsStreams if it wasn't already.
void freeTestResultsStreams ()
 frees the resources used by the test_results_streams
void outputResultLineToFile (const string &filename, const Vec &results, bool append, const string &names)
 output a test result line to a file
void setTrainCost (Vec &cost)

Static Protected Member Functions

static void declareOptions (OptionList &ol)
 Declare options (data fields) for the class.

Protected Attributes

Vec tmpvec
ofstream * train_objective_stream
 file stream where to save objectives and costs during training
Array< ofstream * > test_results_streams
 opened streams where to save test results
string expdir
 the directory in which to save files related to this model (see setExperimentDirectory()) You may assume that it ends with a slash (setExperimentDirectory(...) ensures this).
int epoch_
 It's used as part of the model filename saved by calling save(), which measure() does when save_at_every_epoch is true.
bool distributed_
 This is set to true to indicate that MPI parallelization occured at the level of this learner possibly with data distributed across several nodes (in which case PLMPI::synchronized should be false) (this is initially false)
real earlystop_previousval
 temporary values relevant for early stopping
Array< Measurer * > measurers
 array of measurers:
bool measure_cpu_time_first
bool each_cpu_saves_its_errors
Vec train_cost

Private Member Functions

void build_ ()

Static Private Attributes

static Vec tmp_input
static Vec tmp_target
static Vec tmp_weight
static Vec tmp_output
static Vec tmp_costs

Detailed Description

Deprecated:
This class is DEPRECATED, derive from PLearner instead.

The base class for learning algorithms, which should be the main "products" of PLearn.

The main things that a Learner can do are:

    void train(VMat training_set);             // get trained
    void use(const Vec& input, Vec& output);   // compute output given input
    Vec test(VMat test_set);                   // compute some performance statistics on a test set
    void applyAndComputeCosts(const VMat& data, VMat outputs, VMat costs);
                                               // compute outputs and costs when applying trained model on data

Definition at line 73 of file Learner.h.


Member Typedef Documentation


Constructor & Destructor Documentation

PLearn::Learner::Learner ( int  the_inputsize = 0,
int  the_targetsize = 0,
int  the_outputsize = 0 
)

**** SUBCLASS WRITING: **** All subclasses of Learner should implement this form of constructor. Constructors should simply set all build options (member variables) to acceptable values and call build(), which does the actual job of constructing the object.

Definition at line 74 of file Learner.cc.

References default_vlog(), PLearn::mean_stats(), measure_cpu_time_first, minibatch_size, report_test_progress_every, setEarlyStopping(), setTestStatistics(), PLearn::stderr_stats(), test_every, and vlog.

    :train_objective_stream(0), epoch_(0), distributed_(false),
     inputsize_(the_inputsize), targetsize_(the_targetsize), outputsize_(the_outputsize), 
     weightsize_(0), dont_parallelize(false), save_at_every_epoch(false), save_objective(true), best_step(0)
{
    test_every = 1;
    minibatch_size = 1; // by default call use, not apply
    setEarlyStopping(-1, 0, 0); // No early stopping by default
    vlog = default_vlog();
    report_test_progress_every = 10000;
    measure_cpu_time_first=false;
    setTestStatistics(mean_stats() & stderr_stats());
}
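
A minimal sketch of this pattern for a hypothetical subclass (MyLearner is illustrative, not part of PLearn):

    class MyLearner : public PLearn::Learner
    {
    public:
        // Constructors only set build options to acceptable values,
        // then delegate the actual construction to build().
        MyLearner(int the_inputsize = 0, int the_targetsize = 0,
                  int the_outputsize = 0)
            : Learner(the_inputsize, the_targetsize, the_outputsize)
        {
            build();
        }
    };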


PLearn::Learner::~Learner ( ) [virtual]

Definition at line 374 of file Learner.cc.

References freeTestResultsStreams(), and train_objective_stream.



Member Function Documentation

string PLearn::Learner::_classname_ ( ) [static]

Returns the name of this class as a string.

Reimplemented from PLearn::Object.

Reimplemented in PLearn::ConditionalDistribution, PLearn::ConditionalGaussianDistribution, PLearn::Distribution, PLearn::EmpiricalDistribution, PLearn::LocallyWeightedDistribution, PLearn::NeuralNet, and PLearn::GraphicalBiText.

Definition at line 88 of file Learner.cc.

OptionList & PLearn::Learner::_getOptionList_ ( ) [static]
RemoteMethodMap & PLearn::Learner::_getRemoteMethodMap_ ( ) [static]
bool PLearn::Learner::_isa_ ( const Object * o) [static]
void PLearn::Learner::_static_initialize_ ( ) [static]
void PLearn::Learner::appendMeasurer ( Measurer measurer) [inline]

Declare a new measurer whose measure method will be called when the measure method of this learner is called (in particular after each training epoch).

Definition at line 555 of file Learner.h.

    { measurers.append(&measurer); }
void PLearn::Learner::apply ( const VMat data,
VMat  outputs 
) [virtual]

Calls the 'use' method many times on the first inputsize() elements of each row of a 'data' VMat, and put the machine's 'outputs' in a writable VMat (e.g. maybe a file, or a matrix). Note: if one wants to compute costs as well, then the method applyAndComputeCosts should be called instead.

Definition at line 538 of file Learner.cc.

References i, inputsize(), PLearn::VMat::length(), n, outputsize(), PLearn::TVec< T >::subVec(), use(), and PLearn::VMat::width().

{
    int n=data.length();
    Vec data_row(data.width());
    Vec input = data_row.subVec(0,inputsize());
    Vec output(outputsize());
    for (int i=0;i<n;i++)
    {
        data->getRow(i,data_row); // also gets input_row and target
        use(input,output);
        outputs->putRow(i,output);
    }
}
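
A hedged usage sketch (computeAllOutputs is an illustrative helper, not PLearn API): apply() expects a writable outputs VMat with one row per data row and outputsize() columns, which can be backed by an in-memory Mat.

    #include <Learner.h>

    void computeAllOutputs(PLearn::Learner& learner, PLearn::VMat data)
    {
        using namespace PLearn;
        Mat outputs_storage(data.length(), learner.outputsize());
        VMat outputs(outputs_storage);  // writable, in-memory VMat
        learner.apply(data, outputs);   // fills one output row per data row
    }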


void PLearn::Learner::applyAndComputeCosts ( const VMat data,
VMat  outputs,
VMat  costs 
) [virtual]

This method calls useAndCost repetitively on all the rows of data, putting all the resulting output and cost vectors in the outputs and costs VMat's.

Definition at line 615 of file Learner.cc.

References costsize(), i, PLearn::VMat::length(), minibatch_size, n, outputsize(), PLearn::TVec< T >::subVec(), and useAndCostOnTestVec().

Referenced by applyAndComputeCostsOnTestMat().

{
    int n=data.length();
    int ncostfuncs = costsize();
    Vec output_row(outputsize()*minibatch_size);
    Vec costs_row(ncostfuncs);
    for (int i=0;i*minibatch_size<n;i++)
    {
        // data->getRow(i,data_row); // also gets input_row and target
        useAndCostOnTestVec(data, i*minibatch_size, output_row, costs_row);
        // useAndCostOnTestVec(data, i, output_row, costs_row);
        // useAndCost(input_row,target,output_row,costs_row); // does the work
        //outputs->putRow(i,output_row); // save the outputs          
        for (int k=0; k<minibatch_size; k++)
        {
            outputs->putRow(i+k,output_row.subVec(k*outputsize(),outputsize())); // save the outputs
        }
        costs->putRow(i,costs_row); // save the costs
    }
}


void PLearn::Learner::applyAndComputeCostsOnTestMat ( const VMat test_set,
int  i,
const Mat output_block,
const Mat cost_block 
) [virtual]

Like useAndCostOnTestVec, but on a block (of length minibatch_size) of rows from the test set: apply learner and compute outputs and costs for the block of test_set rows starting at i.

By default calls applyAndComputeCosts.

Definition at line 824 of file Learner.cc.

References applyAndComputeCosts(), PLearn::TMat< T >::length(), and PLearn::VMat::subMatRows().

Referenced by test().

{
    applyAndComputeCosts(test_set.subMatRows(i,output_block.length()),output_block,cost_block);
    //applyAndComputeCosts(test_set.subMatRows(i,output_block.length()*minibatch_size),output_block,cost_block);
}


string PLearn::Learner::basename ( ) const

returns expdir+train_set->getAlias() (if train_set is indeed defined and has an alias...)

Definition at line 118 of file Learner.cc.

References PLearn::Object::classname(), expdir, experiment_name, PLERROR, PLWARNING, and train_set.

Referenced by measure(), and stop_if_wanted().

{       
    if(!experiment_name.empty())
    {
        PLWARNING("** Warning: the experiment_name system is DEPRECATED, please use the expdir system from now on, through setExperimentDirectory, and don't set an experiment_name. For now I'll be using the specified experiment_name=%s as the default basename for your results, but this won't be supported in the future",experiment_name.c_str());
        return experiment_name;
    }
    else if(expdir.empty())
    {
        PLERROR("Problem in Learner: Please call setExperimentDirectory for your learner prior to calling a train/test");
    }
    else if(!train_set)
    {
        PLWARNING("You should call setTrainingSet at the beginning of the train method in class %s ... Using 'unknown' as alias for now...", classname().c_str());
        return expdir + "unknown";
    }
    /* Aliases are now removed.
       else if(train_set->getAlias().empty())
       {
       //PLWARNING("The training set has no alias defined for it (you could call setAlias(...)) Using 'unknown' as alias");
       return expdir + "unknown";
       }
       return expdir+train_set->getAlias();
    */
    PLERROR("In Learner::basename - The alias system is now out-of-order, update your code !");
    return "";
}


void PLearn::Learner::build ( ) [virtual]

**** SUBCLASS WRITING: **** This method should be redefined in subclasses, to just call inherited::build() and then build_()

Reimplemented from PLearn::Object.

Reimplemented in PLearn::ConditionalGaussianDistribution, PLearn::Distribution, PLearn::LocallyWeightedDistribution, PLearn::NeuralNet, and PLearn::GraphicalBiText.

Definition at line 240 of file Learner.cc.

References PLearn::Object::build(), and build_().

Referenced by PLearn::NeuralNet::build(), PLearn::GraphicalBiText::build(), and PLearn::Distribution::build().


void PLearn::Learner::build_ ( ) [private]

**** SUBCLASS WRITING: **** The build_ and build methods should be redefined in subclasses. build_ should do the actual building of the Learner according to build options (member variables) previously set. (These may have been set by hand, by a constructor, by the load method, or by setOption.) As build() may be called several times (after changing options, to "rebuild" an object with different build options), make sure your implementation can handle this properly.

Reimplemented from PLearn::Object.

Reimplemented in PLearn::Distribution, PLearn::LocallyWeightedDistribution, PLearn::NeuralNet, and PLearn::GraphicalBiText.

Definition at line 233 of file Learner.cc.

References earlystop_minval, and earlystop_previousval.

Referenced by build().

{
    // Early stopping initialisation
    earlystop_previousval = FLT_MAX;
    earlystop_minval = FLT_MAX;
}
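
A hedged sketch of the redefinition pattern described above (MyLearner is illustrative):

    void MyLearner::build_()
    {
        // do the actual building according to the current build options;
        // must tolerate being called several times ("rebuild")
    }

    void MyLearner::build()
    {
        inherited::build();  // i.e. Learner::build()
        build_();
    }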


virtual void PLearn::Learner::computeCost ( const Vec input,
const Vec target,
const Vec output,
const Vec cost 
) [virtual]

computes the cost vec, given input, target and output. The default version applies the declared CostFunc's on the (output,target) pair, putting the cost computed for each CostFunc in an element of the cost vector.

If you overload this method in subclasses (e.g. to compute a cost that depends on the internal elements of the model), you must also redefine costsize() and costNames() accordingly.

Referenced by PLearn::TopDownAsymetricDeepNetwork::computeCostsFromOutputs(), and PLearn::NeuralNet::useAndCost().


virtual void PLearn::Learner::computeCosts ( const VVec input,
VVec target,
VVec weight,
Vec costs 
) [virtual]

Default calls computeOutputAndCosts. This may be overloaded if there is a more efficient way to compute the costs directly, without computing the whole output vector.

void PLearn::Learner::computeCosts ( const VMat data,
VMat  costs 
) [virtual]

This method calls useAndCost repetitively on all the rows of data, throwing away the resulting output vectors but putting all the cost vectors in the costs VMat.

Definition at line 555 of file Learner.cc.

References costsize(), PLearn::endl(), i, PLearn::VMat::length(), minibatch_size, n, outputsize(), and useAndCostOnTestVec().

{
    int n=data.length();
    int ncostfuncs = costsize();
    Vec output_row(outputsize());
    Vec cost(ncostfuncs);
    cout << ncostfuncs << endl;
    for (int i=0;i*minibatch_size<n;i++)
    {
        useAndCostOnTestVec(data, i*minibatch_size, output_row, cost);
        costs->putRow(i,cost); // save the costs
    }
}


virtual void PLearn::Learner::computeCostsFromOutputs ( const VVec input,
const Vec output,
const VVec target,
const VVec weight,
Vec costs 
) [virtual]

*** SUBCLASS WRITING: *** This should be overloaded in subclasses to compute the weighted costs from already computed output.

Referenced by PLearn::NeuralProbabilisticLanguageModel::computeOutputAndCosts().


void PLearn::Learner::computeLeaveOneOutCosts ( const VMat data,
VMat  costsmat 
) [virtual]

For each data point i, trains with dataset removeRow(data,i) and calls useAndCost on point i, puts results in costs vmat.

Definition at line 569 of file Learner.cc.

References costsize(), PLearn::flush(), i, PLearn::VMat::length(), outputsize(), PLearn::removeRow(), train(), useAndCostOnTestVec(), and vlog.

{
    // Vec testsample(inputsize()+targetsize());
    // Vec testinput = testsample.subVec(0,inputsize());
    // Vec testtarget = testsample.subVec(inputsize(),targetsize());
    Vec output(outputsize());
    Vec cost(costsize());
    // VMat subset;
    for(int i=0; i<data.length(); i++)
    {
        // data->getRow(i,testsample);
        train(removeRow(data,i));
        useAndCostOnTestVec(data, i, output, cost);
        // useAndCost(testinput,testtarget,output,cost);
        costsmat->putRow(i,cost);
        vlog << '.' << flush;
        if(i%100==0)
            vlog << '\n' << i << flush;
    }
}


void PLearn::Learner::computeLeaveOneOutCosts ( const VMat data,
VMat  costsmat,
CostFunc  costf 
) [virtual]

Same as above, except that the single cost function passed as argument is computed, rather than all the costs declared with setTestCostFunctions (plus any possible additional internal cost).

Definition at line 590 of file Learner.cc.

References PLearn::flush(), i, inputsize(), PLearn::VMat::length(), outputsize(), PLERROR, PLearn::removeRow(), PLearn::TVec< T >::subVec(), targetsize(), train(), use(), vlog, and PLearn::VMat::width().

{
    // norman: added parenthesis to clarify precendence
    if( (costsmat.length() != data.length()) | (costsmat.width()!=1))
        PLERROR("In Learner::computeLeaveOneOutCosts bad dimensions for costsmat VMat");
    Vec testsample(inputsize()+targetsize());
    Vec testinput = testsample.subVec(0,inputsize());
    Vec testtarget = testsample.subVec(inputsize(),targetsize());
    Vec output(outputsize());
    VMat subset;
    for(int i=0; i<data.length(); i++)
    {
        data->getRow(i,testsample);
        train(removeRow(data,i));
        use(testinput,output);
        costsmat->put(i,0,costf(output,testtarget));
        vlog << '.' << flush;
        if(i%100==0)
            vlog << '\n' << i << flush;
    }
}


virtual void PLearn::Learner::computeOutput ( const VVec input,
Vec output 
) [virtual]

*** SUBCLASS WRITING: *** This should be overloaded in subclasses to compute the output from the input

Referenced by PLearn::NeuralProbabilisticLanguageModel::computeOutputAndCosts().


virtual void PLearn::Learner::computeOutputAndCosts ( const VVec input,
VVec target,
const VVec weight,
Vec output,
Vec costs 
) [virtual]

Default calls computeOutput and computeCostsFromOutputs. You may overload this if you have a more efficient way to compute both output and weighted costs at the same time.

Vec PLearn::Learner::computeTestStatistics ( const VMat costs)

Given a VMat of costs as computed for example with computeCosts or with applyAndComputeCosts, compute the test statistics over those costs. This is the concatenation of the statistics computed for each of the columns (cost functions) of costs.

Definition at line 636 of file Learner.cc.

References PLearn::StatsItArray::computeStats(), PLearn::concat(), and test_statistics.

{
    return concat(test_statistics.computeStats(costs));
}
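
A hedged sketch tying computeCosts and computeTestStatistics together (testStatsOn is an illustrative helper):

    PLearn::Vec testStatsOn(PLearn::Learner& learner, PLearn::VMat test_set)
    {
        using namespace PLearn;
        Mat costs_storage(test_set.length(), learner.costsize());
        VMat costs(costs_storage);
        learner.computeCosts(test_set, costs);        // one cost row per sample
        return learner.computeTestStatistics(costs);  // concatenated statistics
    }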


Array< string > PLearn::Learner::costNames ( ) const [virtual]

returns an Array of strings for the names of the components of the cost. Default version returns the info() strings of the cost functions in test_costfuncs.

Reimplemented in PLearn::NeuralNet.

Definition at line 838 of file Learner.cc.

References i, PLearn::Object::info(), PLearn::TVec< T >::size(), PLearn::space_to_underscore(), and test_costfuncs.

Referenced by testResultsNames().

{
    Array<string> cost_names(test_costfuncs.size());
    for (int i=0; i<cost_names.size(); i++)
        cost_names[i] = space_to_underscore(test_costfuncs[i]->info());
    return cost_names;
}


int PLearn::Learner::costsize ( ) const [virtual]

**** SUBCLASS WRITING: should be re-defined if user re-defines computeCost; the default version returns test_costfuncs.size()

Reimplemented in PLearn::NeuralNet.

Definition at line 835 of file Learner.cc.

References PLearn::TVec< T >::size(), and test_costfuncs.

Referenced by applyAndComputeCosts(), computeCosts(), computeLeaveOneOutCosts(), and test().

{ return test_costfuncs.size(); }


void PLearn::Learner::declareOptions ( OptionList ol) [static, protected]

Declare options (data fields) for the class.

Redefine this in subclasses: call declareOption(...) for each option, and then call inherited::declareOptions(options). Please call the inherited method AT THE END to get the options listed in a consistent order (from most recently defined to least recently defined).

  void MyDerivedClass::declareOptions(OptionList& ol)
  {
      declareOption(ol, "inputsize", &MyDerivedClass::inputsize_,
                    OptionBase::buildoption,
                    "The size of the input; it must be provided");
      declareOption(ol, "weights", &MyDerivedClass::weights,
                    OptionBase::learntoption,
                    "The learned model weights");
      inherited::declareOptions(ol);
  }
Parameters:
    ol    List of options that is progressively being constructed for the current class.

Reimplemented from PLearn::Object.

Reimplemented in PLearn::ConditionalGaussianDistribution, PLearn::Distribution, PLearn::EmpiricalDistribution, PLearn::LocallyWeightedDistribution, PLearn::NeuralNet, and PLearn::GraphicalBiText.

Definition at line 147 of file Learner.cc.

References PLearn::OptionBase::buildoption, PLearn::declareOption(), PLearn::Object::declareOptions(), dont_parallelize, earlystop_max_degradation, earlystop_max_degraded_steps, earlystop_min_improvement, earlystop_min_value, earlystop_relative_changes, earlystop_save_best, earlystop_testresultindex, earlystop_testsetnum, expdir, inputsize_, minibatch_size, outputsize_, save_at_every_epoch, save_objective, targetsize_, test_costfuncs, test_every, test_statistics, and weightsize_.

Referenced by PLearn::NeuralNet::declareOptions(), and PLearn::Distribution::declareOptions().

{
    declareOption(ol, "inputsize", &Learner::inputsize_, OptionBase::buildoption, 
                  "dimensionality of input vector \n");

    declareOption(ol, "outputsize", &Learner::outputsize_, OptionBase::buildoption, 
                  "dimensionality of output \n");

    declareOption(ol, "targetsize", &Learner::targetsize_, OptionBase::buildoption, 
                  "dimensionality of target \n");

    declareOption(ol, "weightsize", &Learner::weightsize_, OptionBase::buildoption, 
                  "Number of weights within target.  The last 'weightsize' fields of the target vector will be used as cost weights.\n"
                  "This is usually 0 (no weight) or 1 (1 weight per sample). Special loss functions may be able to give a meaning\n"
                  "to weightsize>1. Not all learners support weights.");

    declareOption(ol, "dont_parallelize", &Learner::dont_parallelize, OptionBase::buildoption, 
                  "By default, MPI parallelization done at a given level prevents further parallelization\n"
                  "at levels further down. If true, this means *don't parallelize processing at this level*");

    declareOption(ol, "earlystop_testsetnum", &Learner::earlystop_testsetnum, OptionBase::buildoption, 
                  "    index of test set (in test_sets) to use for early \n"
                  "    stopping (-1 means no early-stopping) \n");

    declareOption(ol, "earlystop_testresultindex", &Learner::earlystop_testresultindex, OptionBase::buildoption, 
                  "    index of statistic (as returned by test) to use\n");

    declareOption(ol, "earlystop_max_degradation", &Learner::earlystop_max_degradation, OptionBase::buildoption, 
                  "    maximum degradation in error from last best value\n");

    declareOption(ol, "earlystop_min_value", &Learner::earlystop_min_value, OptionBase::buildoption, 
                  "    minimum error beyond which we stop\n");

    declareOption(ol, "earlystop_min_improvement", &Learner::earlystop_min_improvement, OptionBase::buildoption, 
                  "    minimum improvement in error otherwise we stop\n");

    declareOption(ol, "earlystop_relative_changes", &Learner::earlystop_relative_changes, OptionBase::buildoption, 
                  "    are max_degradation and min_improvement relative?\n");

    declareOption(ol, "earlystop_save_best", &Learner::earlystop_save_best, OptionBase::buildoption, 
                  "    if yes, then return with saved 'best' model\n");

    declareOption(ol, "earlystop_max_degraded_steps", &Learner::earlystop_max_degraded_steps, OptionBase::buildoption, 
                  "    ax. nb of steps beyond best found (-1 means ignore) \n");

    declareOption(ol, "save_at_every_epoch", &Learner::save_at_every_epoch, OptionBase::buildoption, 
                  "    save learner at each epoch?\n");

    declareOption(ol, "save_objective", &Learner::save_objective, OptionBase::buildoption, 
                  "    save objective at each epoch?\n");

    declareOption(ol, "expdir", &Learner::expdir, OptionBase::buildoption,
                  "   The directory in which to save results \n");

    declareOption(ol, "test_costfuncs", &Learner::test_costfuncs, OptionBase::buildoption,
                  "   The cost functions used by the default useAndCost method \n");

    declareOption(ol, "test_statistics", &Learner::test_statistics, OptionBase::buildoption,
                  "   The test statistics used by the default test method \n",
                  "mean_stats() & stderr_stats()");

    declareOption(ol, "test_every", &Learner::test_every, OptionBase::buildoption, 
                  "   Compute cost on the test set every <test_every> steps (if 0, then no test is done during training\n");

    declareOption(ol, "minibatch_size", &Learner::minibatch_size, 
                  OptionBase::buildoption, 
                  "   size of blocks over which to perform tests, calling 'apply' if >1, otherwise caling 'use'\n");

    inherited::declareOptions(ol);
}


static const PPath& PLearn::Learner::declaringFile ( ) [inline, static]
Learner * PLearn::Learner::deepCopy ( CopiesMap & copies) const [virtual]
PStream & PLearn::Learner::default_vlog ( ) [static]

The default stream to which vlog is set upon construction of all Learners (defaults to cout)

Definition at line 64 of file Learner.cc.

References PLearn::PStream::outmode, and PLearn::PStream::raw_ascii.

Referenced by Learner().

{
    //  static oassignstream default_vlog = cout;
    static PStream default_vlog(&cout);
    default_vlog.outmode=PStream::raw_ascii;
    return default_vlog;
}


int PLearn::Learner::epoch ( ) const [inline]

Definition at line 409 of file Learner.h.

Referenced by measure().

{ return epoch_; }


void PLearn::Learner::forget ( ) [virtual]

*** SUBCLASS WRITING: *** This method should be called AFTER or inside the build method, e.g. in order to re-initialize parameters. It should put the Learner in a 'fresh' state, not being influenced by any past call to train (everything learned is forgotten!).

Reimplemented in PLearn::NeuralNet.

Definition at line 246 of file Learner.cc.

References earlystop_minval, earlystop_previousval, and epoch_.

Referenced by PLearn::NeuralNet::forget().

{
    // Early stopping parameters initialisation
    earlystop_previousval = FLT_MAX;
    earlystop_minval = FLT_MAX;
    epoch_ = 0;
}


void PLearn::Learner::freeTestResultsStreams ( ) [protected]

frees the resources used by the test_results_streams

Definition at line 354 of file Learner.cc.

References n, PLearn::TVec< T >::resize(), PLearn::TVec< T >::size(), and test_results_streams.

Referenced by openTestResultsStreams(), and ~Learner().

{
    int n = test_results_streams.size();
    for(int k=0; k<n; k++)
        delete test_results_streams[k];
    test_results_streams.resize(0);
}


string PLearn::Learner::getExperimentDirectory ( ) const [inline]

Definition at line 230 of file Learner.h.

{ return expdir; }
const Array<VMat>& PLearn::Learner::getTestDuringTrain ( ) const [inline]

return the test sets that are used during training

Definition at line 436 of file Learner.h.

    { return test_sets; }
ostream & PLearn::Learner::getTestResultsStream ( int  k) [protected]

Returns the stream corresponding to testset #k (as specified by setTestDuringTrain). The stream is opened by calling openTestResultsStreams if it wasn't already.

Definition at line 363 of file Learner.cc.

References openTestResultsStreams(), PLearn::TVec< T >::size(), and test_results_streams.


Vec PLearn::Learner::getTrainCost ( ) [inline]

Definition at line 565 of file Learner.h.

{ return train_cost; };
VMat PLearn::Learner::getTrainingSet ( ) [inline]

Definition at line 256 of file Learner.h.

{ return train_set; }
ostream & PLearn::Learner::getTrainObjectiveStream ( ) [protected]

returns the stream for writing the train objective (and other costs). The stream is opened by calling openTrainObjectiveStream if it wasn't already.

Definition at line 317 of file Learner.cc.

References openTrainObjectiveStream(), and train_objective_stream.


int PLearn::Learner::inputsize ( ) const [inline]

Simple accessor methods: (do NOT overload! Set inputsize_ and outputsize_ instead)

Definition at line 405 of file Learner.h.

Referenced by apply(), PLearn::NeuralNet::build_(), computeLeaveOneOutCosts(), PLearn::EmpiricalDistribution::EmpiricalDistribution(), PLearn::NeuralNet::initializeParams(), PLearn::Distribution::train(), and useAndCostOnTestVec().

{ return inputsize_; }


void PLearn::Learner::load ( const PPath filename = "") [virtual]

DEPRECATED. Call PLearn::load(filename, object) instead.

Reimplemented from PLearn::Object.

Definition at line 931 of file Learner.cc.

References experiment_name, PLearn::Object::load(), and PLERROR.

Referenced by measure().

{
    if (!filename.empty())
        Object::load(filename);
    else if (!experiment_name.empty())
        Object::load(experiment_name);
    else
        PLERROR("Called Learner::load with an empty filename, while experiment_name is also empty. What file name am I supposed to use???? Anyway this method is DEPRECATED, you should call directly function PLearn::load(whatever_filename_you_want, the_object) ");
}


void PLearn::Learner::makeDeepCopyFromShallowCopy ( CopiesMap copies) [virtual]

Does the necessary operations to transform a shallow copy (this) into a deep copy by deep-copying all the members that need to be.

This needs to be overridden by every class that adds "complex" data members to the class, such as Vec, Mat, PP<Something>, etc. Typical implementation:

  void CLASS_OF_THIS::makeDeepCopyFromShallowCopy(CopiesMap& copies)
  {
      inherited::makeDeepCopyFromShallowCopy(copies);
      deepCopyField(complex_data_member1, copies);
      deepCopyField(complex_data_member2, copies);
      ...
  }
Parameters:
    copies    A map used by the deep-copy mechanism to keep track of already-copied objects.

Reimplemented from PLearn::Object.

Reimplemented in PLearn::ConditionalDistribution, PLearn::ConditionalGaussianDistribution, PLearn::Distribution, PLearn::EmpiricalDistribution, PLearn::LocallyWeightedDistribution, and PLearn::NeuralNet.

Definition at line 89 of file Learner.cc.

References avg_objective, avgsq_objective, PLearn::deepCopyField(), test_costfuncs, and test_statistics.

Referenced by PLearn::NeuralNet::makeDeepCopyFromShallowCopy().

{
    Object::makeDeepCopyFromShallowCopy(copies);
    //Measurer::makeDeepCopyFromShallowCopy(copies);
    //deepCopyField(test_sets, copies);
    //deepCopyField(measurers, copies);
    deepCopyField(avg_objective, copies);
    deepCopyField(avgsq_objective, copies);
    deepCopyField(test_costfuncs, copies);
    deepCopyField(test_statistics, copies);
}


bool PLearn::Learner::measure ( int  step,
const Vec costs 
) [virtual]

**** SUBCLASS WRITING: This method should be called by an iterative training algorithm's train method after each training step (the meaning of a training step is learner-dependent), passing it the current step number and the costs relevant for the training process. Training must be stopped if the returned value is true: it indicates the early-stopping criterion has been met. The default version writes step and costs to the objectiveout stream at each step, performs the tests specified by setTestDuringTrain every 'test_every' steps, decides upon early-stopping as specified by setEarlyStopping, and calls the measure method of all measurers that have been declared for addition with appendMeasurer.

This is the measure method from Measurer. You may override this method if you wish to measure other things during the training. In this case your method will probably want to call this default version (Learner::measure) as part of it.

Definition at line 407 of file Learner.cc.

References PLearn::abs(), basename(), best_step, each_cpu_saves_its_errors, earlystop_max_degradation, earlystop_max_degraded_steps, earlystop_min_improvement, earlystop_min_value, earlystop_minval, earlystop_previousval, earlystop_relative_changes, earlystop_save_best, earlystop_testresultindex, earlystop_testsetnum, PLearn::endl(), epoch(), epoch_, expdir, i, PLearn::join(), PLearn::TVec< T >::length(), load(), measurers, minibatch_size, n, outputResultLineToFile(), PLERROR, PLearn::PLMPI::rank, save(), save_at_every_epoch, save_objective, PLearn::TVec< T >::size(), PLearn::PLMPI::synchronized, test(), test_every, test_sets, PLearn::tostring(), trainObjectiveNames(), and vlog.

{
    earlystop_min_value /= minibatch_size;
    if (costs.length()<1)
        PLERROR("Learner::measure: costs.length_=%d should be >0", costs.length());
  
    //vlog << ">>> Now measuring for step " << step << " (costs = " << costs << " )" << endl; 

    //  if (objectiveout)
    //  objectiveout << setw(5) << step << "  " << costs << "\n";


    if (((!PLMPI::synchronized && each_cpu_saves_its_errors) || PLMPI::rank==0) && save_objective)
        outputResultLineToFile(basename()+".objective",costs,true,join(trainObjectiveNames()," "));

    bool muststop = false;

    if (((!PLMPI::synchronized && each_cpu_saves_its_errors) || PLMPI::rank==0) && save_at_every_epoch)
    {
        string fname  = basename()+".epoch"+tostring(epoch())+".psave";
        vlog << " >> Saving model in " << fname << endl;
        PLearn::save(fname, *this);
    }
    if ((test_every != 0) && (step%test_every==0))
    {
        int ntestsets = test_sets.size();
        Array<Vec> test_results(ntestsets);
        for (int n=0; n<ntestsets; n++) // looping over test sets
        {
            test_results[n] = test(test_sets[n]);
            if ((!PLMPI::synchronized && each_cpu_saves_its_errors) || PLMPI::rank==0)
                PLERROR("In Learner::measure - Aliases are gone, so am I !");
            // outputResultLineToFile(basename()+"."+test_sets[n]->getAlias()+".hist.results",test_results[n],true,
            //                           join(testResultsNames()," "));
        }

        if (ntestsets>0 && earlystop_testsetnum>=0) // are we doing early stopping?
        {
            real earlystop_currentval = 
                test_results[earlystop_testsetnum][earlystop_testresultindex];
            //  cout << earlystop_currentval << " " << earlystop_testsetnum << " " << earlystop_testresultindex << endl;
            // Check if early-stopping condition was met
            if ((earlystop_relative_changes &&
                 ((earlystop_currentval-earlystop_minval > 
                   earlystop_max_degradation * abs(earlystop_minval))
                  || (earlystop_currentval < earlystop_min_value)
                  || (earlystop_previousval-earlystop_currentval < 
                      earlystop_min_improvement * abs(earlystop_previousval)))) ||
                (!earlystop_relative_changes &&
                 ((earlystop_currentval-earlystop_minval > earlystop_max_degradation)
                  || (earlystop_currentval < earlystop_min_value)
                  || (earlystop_previousval-earlystop_currentval < 
                      earlystop_min_improvement))) ||
                (earlystop_max_degraded_steps>=0 &&
                 (step-best_step>=earlystop_max_degraded_steps) && 
                 (earlystop_minval < FLT_MAX)))
            { // earlystopping met
                if (earlystop_save_best)
                {
                    string fname  = basename()+".psave";
                    vlog << "Met early-stopping condition!" << endl;
                    vlog << "earlystop_currentval = " << earlystop_currentval << endl;
                    vlog << "earlystop_minval = " << earlystop_minval << endl;
                    vlog << "threshold = " << earlystop_max_degradation*earlystop_minval << endl;
                    vlog << "STOPPING (reloading best model)" << endl;
                    if(expdir.empty()) // old deprecated mode
                        load();
                    else
                        PLearn::load(fname,*this);          
                }
                else
                    cout << "Result for benchmark is: " << test_results << endl;
                muststop = true;
            }
            else // earlystopping not met
            {
                earlystop_previousval = earlystop_currentval;
                if (PLMPI::rank==0 && earlystop_save_best
                    && (earlystop_currentval < earlystop_minval))
                {
                    string fname  = basename()+".psave";
                    vlog << "saving model in " << fname <<  " because of earlystopping / improvement: " << endl;
                    vlog << "earlystop_currentval = " << earlystop_currentval << endl;
                    vlog << "earlystop_minval = " << earlystop_minval << endl;
                    PLearn::save(fname,*this);
                    // update .results file
                    if ((!PLMPI::synchronized && each_cpu_saves_its_errors) || PLMPI::rank==0)
                        PLERROR("In Learner::measure - Aliases are gone, so am I !");
                    /*
                      for (int n=0; n<ntestsets; n++) // looping over test sets
                      outputResultLineToFile(basename()+"."+test_sets[n]->getAlias()+".results",test_results[n],false,
                      join(testResultsNames()," "));
                    */
                    cout << "Result for benchmark is: " << test_results << endl;
                }
            }
            if (earlystop_currentval < earlystop_minval)
            {
                earlystop_minval = earlystop_currentval;
                best_step = step;
                if(PLMPI::rank==0)
                    vlog << "currently best step at " << best_step << " with " << earlystop_currentval << " " << test_results << endl;        
            }
        } 
        else
            // save tests in .results
            if ((!PLMPI::synchronized && each_cpu_saves_its_errors) || PLMPI::rank==0)
                PLERROR("In Learner::measure - Aliases are gone, so am I !");
        /*
          for (int n=0; n<ntestsets; n++) // looping over test sets
          outputResultLineToFile(basename()+"."+test_sets[n]->getAlias()+".results",test_results[n],false,
          join(testResultsNames()," "));
        */
    }

    for (int i=0; i<measurers.size(); i++)
        muststop = muststop || measurers[i]->measure(step,costs);

    ++epoch_;

// BUG: This doesn't work as intented in certain cases (ie. me!)
//#if USING_MPI
//MPI_Barrier(MPI_COMM_WORLD);
//#endif

    return muststop;
}
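
A hedged sketch of the training loop this implies (nepochs and train_costs are illustrative names):

    // inside a subclass' train() method:
    for (int step = 1; step <= nepochs; step++)
    {
        // ... perform one training step, filling train_costs ...
        if (measure(step, train_costs))
            break;  // early-stopping criterion met (best model may have been reloaded)
    }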


void PLearn::Learner::newtest ( VMat  testset,
VecStatsCollector test_stats,
VMat  testoutputs = 0,
VMat  testcosts = 0 
) [virtual]

Should perform test on testset, updating test cost statistics, and optionally filling testoutputs and testcosts.

Definition at line 1029 of file Learner.cc.

References PLERROR.

{
    PLERROR("Learner::newtrain not yet implemented");

/*
  int l = testset.length();
  VVec input;
  VVec target;
  VVec weight;

  Vec output(testoutputs ?outputsize() :0);
  Vec costs(costsize());

  testset->defineSizes(inputsize(),targetsize(),weightsize());

  test_stats.forget();

  for(int i=0; i<l; i++)
  {
  testset.getSample(i, input, target, weight);

  if(testoutputs)
  {
  computeOutputAndCosts(input, target, weight, output, costs);
  testoutputs->putOrAppendRow(i,output);
  }
  else // no need to compute outputs
  computeCosts(input, target, weight, costs);

  if(testcosts)
  testcosts->putOrAppendRow(i, costs);

  test_stats.update(costs);
  }

  test_stats.finalize();

*/
}
void PLearn::Learner::newtrain ( VecStatsCollector train_stats) [virtual]

*** SUBCLASS WRITING: *** Should do the actual training until epoch==nepochs and should call update on the stats with training costs measured on-line

Definition at line 1025 of file Learner.cc.

References PLERROR.

{ PLERROR("newtrain not yet implemented for this learner"); }
void PLearn::Learner::oldwrite ( ostream &  out) const [virtual]

*** SUBCLASS WRITING: *** This matched pair of Object functions needs to be redefined by sub-classes. They are used for saving/loading a model to memory or to file. However, subclasses can call this one to deal with the saving/loading of the following data fields: the current options and the early stopping parameters.

Definition at line 863 of file Learner.cc.

References earlystop_max_degradation, earlystop_max_degraded_steps, earlystop_min_improvement, earlystop_min_value, earlystop_relative_changes, earlystop_save_best, earlystop_testresultindex, earlystop_testsetnum, experiment_name, inputsize_, outputsize_, save_at_every_epoch, targetsize_, test_costfuncs, test_every, test_statistics, PLearn::writeField(), PLearn::writeFooter(), and PLearn::writeHeader().

{
    writeHeader(out,"Learner",1);
    writeField(out,"inputsize",inputsize_);
    writeField(out,"outputsize",outputsize_);
    writeField(out,"targetsize",targetsize_);
    writeField(out,"test_every",test_every); // recently added by senecal
    writeField(out,"earlystop_testsetnum",earlystop_testsetnum);
    writeField(out,"earlystop_testresultindex",earlystop_testresultindex);
    writeField(out,"earlystop_max_degradation",earlystop_max_degradation);
    writeField(out,"earlystop_min_value",earlystop_min_value);
    writeField(out,"earlystop_min_improvement",earlystop_min_improvement);
    writeField(out,"earlystop_relative_changes",earlystop_relative_changes);
    writeField(out,"earlystop_save_best",earlystop_save_best);
    writeField(out,"earlystop_max_degraded_steps",earlystop_max_degraded_steps);
    writeField(out,"save_at_every_epoch",save_at_every_epoch);
    writeField(out,"experiment_name",experiment_name);
    writeField(out,"test_costfuncs",test_costfuncs);
    writeField(out,"test_statistics",test_statistics);
    writeFooter(out,"Learner");
}


void PLearn::Learner::openTestResultsStreams ( ) [protected]

opens the files in append mode for writing the test results

Definition at line 325 of file Learner.cc.

References PLearn::endl(), freeTestResultsStreams(), PLearn::join(), n, PLERROR, PLearn::TVec< T >::resize(), PLearn::TVec< T >::size(), test_results_streams, test_sets, and testResultsNames().

Referenced by getTestResultsStream().

{
    freeTestResultsStreams();
    int n = test_sets.size();
    test_results_streams.resize(n);
    for(int k=0; k<n; k++)
    {
        PLERROR("In Learner::openTestResultsStreams - Come on, do not use this class anymore, aliases are out-of-order");
        string filename = ""; // Dummy string to make the compiler happy.
        /*
          string alias = test_sets[k]->getAlias();
          // if(alias.empty())
          //   PLERROR("In Learner::openTestResultsStreams testset #%d has no defined alias",k);
          string filename = alias.empty() ? string("/dev/null") : expdir+alias+".results";
        */
        test_results_streams[k] = new ofstream(filename.c_str(), ios::out|ios::app);
        ostream& out = *test_results_streams[k];
        if(out.bad())
            PLERROR("In Learner::openTestResultsStreams could not open file %s for appending",filename.c_str());
        // norman: added WIN32 check
#if __GNUC__ < 3 && !defined(WIN32)
        if(out.tellp() == 0)
#else
            if(out.tellp() == streampos(0))
#endif
                out << "#: epoch " << join(testResultsNames()," ") << endl;
    }
}


void PLearn::Learner::openTrainObjectiveStream ( ) [protected]

opens the train.objective file for appending in the expdir

Definition at line 299 of file Learner.cc.

References PLearn::endl(), expdir, PLearn::join(), PLERROR, train_objective_stream, and trainObjectiveNames().

Referenced by getTrainObjectiveStream().

{
    string filename = expdir.empty() ? string("/dev/null") : expdir+"train.objective";
    if(train_objective_stream)
        delete train_objective_stream;
    train_objective_stream = new ofstream(filename.c_str(),ios::out|ios::app);
    ostream& out = *train_objective_stream;
    if(out.bad())
        PLERROR("could not open file %s for appending",filename.c_str());
    // norman: added WIN32 check
#if __GNUC__ < 3 && !defined(WIN32)
    if(out.tellp()==0)
#else
        if(out.tellp() == streampos(0))
#endif
            out << "#  epoch | " << join(trainObjectiveNames()," | ") << endl;
}


void PLearn::Learner::outputResultLineToFile ( const string &  fname,
const Vec results,
bool  append,
const string &  names 
) [protected]

output a test result line to a file

Definition at line 101 of file Learner.cc.

References PLearn::endl(), and epoch_.

Referenced by measure().

{
#if __GNUC__ < 3
    ofstream teststream(fname.c_str(),ios::out|(append?ios::app:0));
#else
    ofstream teststream(fname.c_str(),ios_base::out|(append?ios_base::app:static_cast<ios::openmode>(0)));
#endif
    // norman: added WIN32 check
#if __GNUC__ < 3 && !defined(WIN32)
    if(teststream.tellp()==0)
#else
        if(teststream.tellp() == streampos(0))
#endif
            teststream << "#: epoch " << names << endl;
    teststream << setw(5) << epoch_ << "  " << results << endl;
}


int PLearn::Learner::outputsize ( ) const [inline]

Definition at line 407 of file Learner.h.

Referenced by apply(), applyAndComputeCosts(), PLearn::NeuralNet::build_(), computeCosts(), computeLeaveOneOutCosts(), and test().

{ return outputsize_; }


void PLearn::Learner::save ( const PPath filename = "") const [virtual]

DEPRECATED. Call PLearn::save(filename, object) instead.

Reimplemented from PLearn::Object.

Definition at line 917 of file Learner.cc.

References experiment_name, force_saving_on_all_processes, PLERROR, PLearn::PLMPI::rank, and PLearn::Object::save().

Referenced by measure(), stop_if_wanted(), and PLearn::NeuralNet::train().

{
#if USING_MPI
    if (PLMPI::rank!=0 && !force_saving_on_all_processes)
        return;
#endif
    if(!filename.empty())
        Object::save(filename);
    else if(!experiment_name.empty())
        Object::save(experiment_name);
    else
        PLERROR("Called Learner::save with an empty filename, while experiment_name is also empty. What file name am I supposed to use???? Anyway this method is DEPRECATED, you should call directly function PLearn::save(whatever_filename_you_want, the_object) ");
}


void PLearn::Learner::setEarlyStopping ( int  which_testset,
int  which_testresult,
real  max_degradation,
real  min_value = -FLT_MAX,
real  min_improvement = 0,
bool  relative_changes = true,
bool  save_best = true,
int  max_degraded_steps = -1 
)

which_testset and which_testresult select, among those specified in setTestDuringTrain, the test set and cost function on which early stopping is based. The degradation is the difference between the current value and the smallest value ever attained: training is stopped if it grows beyond max_degradation. Training is also stopped if the current value goes below min_value, or if the difference between the previous value and the current value falls below min_improvement. If relative_changes is true, then max_degradation is relative to the smallest value ever attained, and min_improvement is relative to the previous value. If save_best, the model with the lowest validation error is saved (with the write method, to memory), and if early stopping occurs this saved model is reloaded (with the read method).

Definition at line 390 of file Learner.cc.

References earlystop_max_degradation, earlystop_max_degraded_steps, earlystop_min_improvement, earlystop_min_value, earlystop_minval, earlystop_previousval, earlystop_relative_changes, earlystop_save_best, earlystop_testresultindex, and earlystop_testsetnum.

Referenced by Learner().

{
    earlystop_testsetnum = which_testset;
    earlystop_testresultindex = which_testresult;
    earlystop_max_degradation = max_degradation;
    earlystop_min_value = min_value;
    earlystop_previousval = FLT_MAX;
    earlystop_minval = FLT_MAX;
    earlystop_relative_changes = relative_changes;
    earlystop_min_improvement = min_improvement;
    earlystop_save_best = save_best;
    earlystop_max_degraded_steps = max_degraded_steps;
}
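
A minimal usage sketch, assuming learner points to some Learner subclass and that test set 1 / cost index 0 were registered through setTestDuringTrain (all values are illustrative):

// Stop when the selected cost degrades by more than 5% (relative) over the
// best value seen, or improves by less than 0.1% (relative) between steps;
// keep the best model in memory.
learner->setEarlyStopping(1,        // which_testset
                          0,        // which_testresult
                          0.05,     // max_degradation
                          -FLT_MAX, // min_value (no absolute floor)
                          0.001,    // min_improvement
                          true,     // relative_changes
                          true);    // save_best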

void PLearn::Learner::setExperimentDirectory ( const PPath the_expdir) [virtual]

The experiment directory is the directory in which files related to this model are to be saved.

Typically, the following files will be saved in that directory:

model.psave (saved best model)
model#.psave (model saved after epoch #)
model#.<trainset_alias>.objective (training objective and costs after each epoch)
model#.<testset_alias>.results (test results after each epoch)

Definition at line 219 of file Learner.cc.

References PLearn::PPath::absolute(), expdir, PLearn::force_mkdir(), PLERROR, and PLearn::PLMPI::rank.

{
#if USING_MPI
    if(PLMPI::rank==0) {
#endif
        if(!force_mkdir(the_expdir))
        {
            PLERROR("In Learner::setExperimentDirectory Could not create experiment directory %s", the_expdir.c_str());
        }
#if USING_MPI
    }
#endif
    expdir = the_expdir.absolute();
}
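
A minimal usage sketch (the directory name is hypothetical):

// Files such as model.psave and model#.psave will then be saved under
// exp/mymodel/, which is created if needed and stored as an absolute path.
learner->setExperimentDirectory("exp/mymodel/");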

void PLearn::Learner::setModel ( const Vec new_options) [virtual]

** DEPRECATED ** Do not use! use the setOption and build methods instead

Definition at line 831 of file Learner.cc.

References PLERROR.

{
    PLERROR("setModel: method not implemented for this Learner (and DEPRECATED!!! DON'T IMPLEMENT IT, DON'T CALL IT. SEE setOption INSTEAD)");
}
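
A minimal sketch of the recommended replacement, assuming "nhidden" is an option declared by the hypothetical concrete subclass of learner:

// Set options by name, then rebuild the object:
learner->setOption("nhidden", "10");
learner->build();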
void PLearn::Learner::setTestCostFunctions ( Array< CostFunc costfunctions) [inline]

Call this method to define what cost functions are computed by default (these are generic cost functions which compare the output with the target)

Definition at line 418 of file Learner.h.

Referenced by PLearn::Distribution::Distribution().

    { test_costfuncs = costfunctions; }

void PLearn::Learner::setTestDuringTrain ( ostream &  testout,
int  every,
Array< VMat testsets 
) [virtual]

testout: the stream where the test results are to be written.
every: how often (in number of iterations) the tests should be performed.

Definition at line 291 of file Learner.cc.

References test_every, test_sets, and testout.

{
    // testout(&out);//testout = out;
    testout = new StdPStreamBuf(&out);
    test_every = every;
    test_sets = testsets;
}
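
A minimal usage sketch, assuming valid1 and valid2 are hypothetical validation VMats:

// Run the tests every 5 training iterations, writing the results to
// standard output.
Array<VMat> testsets(2);
testsets[0] = valid1;
testsets[1] = valid2;
learner->setTestDuringTrain(cout, 5, testsets);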
void PLearn::Learner::setTestDuringTrain ( Array< VMat testsets) [virtual]

Definition at line 371 of file Learner.cc.

References test_sets.

{  test_sets = testsets; }
void PLearn::Learner::setTestStatistics ( StatsItArray  statistics) [inline]

This method defines what statistics are computed on the costs (which compute a vector of statistics that depend on all the test costs)

Definition at line 423 of file Learner.h.

Referenced by Learner().

    { test_statistics = statistics; }

void PLearn::Learner::setTrainCost ( Vec cost) [inline, protected]

Definition at line 561 of file Learner.h.

Referenced by PLearn::NeuralNet::train().

    { train_cost.resize(cost.length()); train_cost << cost; };

virtual void PLearn::Learner::setTrainingSet ( VMat  training_set) [inline, virtual]

Declare the train_set.

Definition at line 255 of file Learner.h.

Referenced by PLearn::Distribution::train(), and PLearn::NeuralNet::train().

{ train_set = training_set; }

void PLearn::Learner::stop_if_wanted ( ) [virtual]

stopping condition, by default when a file named experiment_name + "_stop" is found to exist.

If that is the case then this file is removed and exit(0) is performed.

Definition at line 941 of file Learner.cc.

References basename(), PLearn::endl(), PLearn::isfile(), PLearn::PLMPI::rank, PLearn::Profiler::report(), save(), PLearn::tostring(), and vlog.

Referenced by test().

{
    string stopping_filename = basename()+".stop";
    if (isfile(stopping_filename))
    {
#ifdef PROFILE
        string profile_report_name = basename();
#if USING_MPI
        profile_report_name += "_r" + tostring(PLMPI::rank);
#endif
        profile_report_name += ".profile";
        ofstream profile_report(profile_report_name.c_str());
        Profiler::report(profile_report);
#endif
#if USING_MPI
        MPI_Barrier(MPI_COMM_WORLD);
        if (PLMPI::rank==0)
        {
            string fname = basename()+".stopped.psave";
            PLearn::save(fname,*this);
            vlog << "saving and quitting because of stop signal" << endl;
            unlink(stopping_filename.c_str()); // remove file if possible
        }
        exit(0);
#else
        unlink(stopping_filename.c_str()); // remove file if possible
        exit(0);
#endif
    }
}
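
So a long-running training process can be interrupted cleanly by creating the stop file from outside. A minimal sketch, assuming the learner's basename() is exp/model (from a shell, touch exp/model.stop has the same effect):

#include <fstream>

int main()
{
    // Creating <basename>.stop makes the training process save itself to
    // <basename>.stopped.psave and exit(0) at its next call to
    // stop_if_wanted().
    std::ofstream stopfile("exp/model.stop");
    return 0;
}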

int PLearn::Learner::targetsize ( ) const [inline]

Definition at line 406 of file Learner.h.

Referenced by PLearn::NeuralNet::build_(), computeLeaveOneOutCosts(), PLearn::Distribution::train(), and useAndCostOnTestVec().

{ return targetsize_; }

Vec PLearn::Learner::test ( VMat  test_set,
const string &  save_test_outputs = "",
const string &  save_test_costs = "" 
) [virtual]

This function should work with and without MPI.

Returns the statistics computed by test_statistics on the test_costfuncs. If save_test_outputs is non-empty, the test outputs are saved in the given file, and similarly for save_test_costs.

If parallelized at this level, only MPI process 0 will save to file, gather costs and compute statistics; all other MPI processes will call useAndCost on different sections of the test_set.

Definition at line 651 of file Learner.cc.

References PLearn::TmpFilenames::addFilename(), applyAndComputeCostsOnTestMat(), PLearn::binread(), PLearn::binwrite(), PLearn::StatsItArray::computeStats(), PLearn::concat(), costsize(), PLearn::TVec< T >::data(), dont_parallelize, PLearn::StatsItArray::finish(), PLearn::StatsItArray::getResults(), i, PLearn::StatsItArray::init(), PLearn::TVec< T >::length(), PLearn::VMat::length(), minibatch_size, outputsize(), PLearn::PLMPI::rank, PLearn::StatsItArray::requiresMultiplePasses(), PLearn::TVec< T >::resize(), PLearn::TMat< T >::resize(), PLearn::PLMPI::size, stop_if_wanted(), PLearn::PLMPI::synchronized, test_statistics, PLearn::StatsItArray::update(), use_file_if_bigger, useAndCostOnTestVec(), and vlog.

Referenced by measure().

{
    int ncostfuncs = costsize();

    Vec output(outputsize()*minibatch_size);
    Vec cost(ncostfuncs);
    Mat output_block(minibatch_size,outputsize());
    Mat cost_block(minibatch_size,costsize()); // one row of costsize() costs per sample in the block

    Vec result;

    VMat outputs; // possibly where to save outputs (and target)
    VMat costs; // possibly where to save costs
    if(PLMPI::rank==0 && !save_test_outputs.empty())
        outputs = new FileVMatrix(save_test_outputs, test_set.length(), outputsize());

    if(PLMPI::rank==0 && !save_test_costs.empty())
        costs = new FileVMatrix(save_test_costs, test_set.length(), ncostfuncs);

    int l = test_set.length();
    ProgressBar progbar(vlog, "Testing this old deprecated Learner you should not be using anymore", l);
    //  + test_set->getAlias(), l); // Aliases are deprecated.
    // ProgressBar progbar(cerr, "Testing " + test_set->getAlias(), l);
    // ProgressBar progbar(nullout(), "Testing " + test_set->getAlias(), l);

    // Do the test statistics require multiple passes?
    bool multipass = test_statistics.requiresMultiplePasses(); 

    // If multiple passes are required, make sure we save the individual costs in an appropriate 'costs' VMat
    if (PLMPI::rank==0 && save_test_costs.empty() && multipass)
    {
        TmpFilenames tmpfile(1);
        bool save_on_file = ncostfuncs*test_set.length() > use_file_if_bigger;
        if (save_on_file)
            costs = new FileVMatrix(tmpfile.addFilename(),test_set.length(),ncostfuncs);
        else
            costs = Mat(test_set.length(),ncostfuncs);
    }

    if(!multipass) // stats can be computed in a single pass?
        test_statistics.init(ncostfuncs);

    if(USING_MPI && PLMPI::synchronized && !dont_parallelize && PLMPI::size>1)
    { // parallel implementation
      // cout << "PARALLEL-DATA TEST" << endl;
#if USING_MPI
        PLMPI::synchronized = false;
        if(PLMPI::rank==0) // process 0 gathers costs, computes statistics and writes stuff to output files if required
        {
            MPIStreams mpistreams(200,200);
//          MPI_Status status;
            for(int i=0; i<l; i++)
            {
                int pnum = 1 + i%(PLMPI::size-1);
                if(!save_test_outputs.empty()) // receive and save output
                {
//                  MPI_Recv(cost.data(), cost.length(), PLMPI_REAL, pnum, 0, MPI_COMM_WORLD, &status);
                    //cerr << "/ MPI #" << PLMPI::rank << " received " << cost.length() << " values from MPI #" << pnum << endl;
                    PLearn::binread(mpistreams[pnum], output);
                    outputs->putRow(i, output);
                }
/*              else // receive output and cost only
                {
                MPI_Recv(output.data(), output.length()+cost.length(), PLMPI_REAL, pnum, 0, MPI_COMM_WORLD, &status);
                //cerr << "/ MPI #" << PLMPI::rank << " received " << cost.length() << " values from MPI #" << pnum << endl;
                outputs->putRow(i,output);
                }*/
                // receive cost
                PLearn::binread(mpistreams[pnum], cost);
                if(costs) // save costs?
                    costs->putRow(i,cost);
                if(!multipass) // stats can be computed in a single pass?
                    test_statistics.update(cost);
                progbar(i);
            }
        }
        else // other processes compute output and cost on different rows of the test_set and send them to process 0
        {
            MPIStream mpistream(0,200,200); // stream to node 0
            int step = PLMPI::size-1;
            for(int i=PLMPI::rank-1; i<l; i+=step)
            {
                useAndCostOnTestVec(test_set, i, output, cost);
                // test_set->getRow(i, sample);
                // useAndCost(input,target,output,cost);
/*              if(save_test_outputs.empty()) // send only cost
                {
                    //cerr << "/ MPI #" << PLMPI::rank << " sending " << cost.length() << " values to MPI #0" << endl;
                    MPI_Send(cost.data(), cost.length(), PLMPI_REAL, 0, 0, MPI_COMM_WORLD);
                }
                else // send output and cost
                {
                    //cerr << "/ MPI #" << PLMPI::rank << " sending " << cost.length() << " values to MPI #0" << endl;
                    MPI_Send(output.data(), output.length()+cost.length(), PLMPI_REAL, 0, 0, MPI_COMM_WORLD);
                } */
                if(!save_test_outputs.empty()) // send output
                    PLearn::binwrite(mpistream, output);
                // send cost
                PLearn::binwrite(mpistream, cost);
            }
        }

        // Finalize statistics computation
        int result_len;
        if(PLMPI::rank==0) // process 0 finalizes stats computation and broadcasts them
        {
            if(!multipass)
            {
                test_statistics.finish();
                result = concat(test_statistics.getResults());
            }
            else    
                result = concat(test_statistics.computeStats(costs));
            result_len = result.length();
        }
        MPI_Bcast(&result_len, 1, MPI_INT, 0, MPI_COMM_WORLD);
        result.resize(result_len);
        MPI_Bcast(result.data(), result.length(), PLMPI_REAL, 0, MPI_COMM_WORLD);
        PLMPI::synchronized = true;
#endif
    }
    else // default sequential implementation
    {

        for (int i=0; i<l; i++)
        {
            if (i%10000<minibatch_size) stop_if_wanted();
            if (minibatch_size>1 && i+minibatch_size<l)
            {
                applyAndComputeCostsOnTestMat(test_set, i, output_block, cost_block);
                if(outputs) // save outputs?
                    outputs->putMat(i,0,output_block);
                if(costs) // save costs?
                    costs->putMat(i,0,cost_block);
                if(!multipass) // stats can be computed in a single pass?
                    test_statistics.update(cost_block);
                i += minibatch_size-1; // the loop's i++ completes the advance to the next block
            }
            else
            {
                useAndCostOnTestVec(test_set, i, output, cost);
                if(outputs) // save outputs?
                    outputs->putRow(i,output);
                if(costs) // save costs?
                    costs->putRow(i,cost);
                if(!multipass) // stats can be computed in a single pass?
                    test_statistics.update(cost);
            }
            // test_set->getRow(i, sample);
            // useAndCost(input, target, output, cost);

            progbar(i);

        }

        // Finalize statistics computation
        if(!multipass)
        {
            test_statistics.finish();
            result = concat(test_statistics.getResults());
        }
        else    
            result = concat(test_statistics.computeStats(costs));

    }

    return result;
}
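
A minimal usage sketch, assuming learner and test_set are already set up (the output filenames are hypothetical):

// Compute the test statistics, additionally saving per-sample outputs
// and costs to file.
Vec stats = learner->test(test_set,
                          "exp/model.test_outputs.pmat",
                          "exp/model.test_costs.pmat");
cout << join(learner->testResultsNames(), " ") << endl;
cout << stats << endl;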

Array< string > PLearn::Learner::testResultsNames ( ) const [virtual]

returns an Array of strings for the names of the cost statistics returned by methods test and computeTestStatistics. Default version returns a cross product between the info() strings of test_statistics and the cost names returned by costNames()

Definition at line 846 of file Learner.cc.

References costNames(), i, j, PLearn::TVec< T >::size(), PLearn::space_to_underscore(), and test_statistics.

Referenced by openTestResultsStreams(), PLearn::prettyprint_test_results(), and trainObjectiveNames().

{
    Array<string> cost_names = costNames();
    Array<string> names(test_statistics.size()*cost_names.size());
    int k=0;
    for (int i=0;i<test_statistics.size();i++)
    {
        string stati = test_statistics[i]->info();
        for (int j=0;j<cost_names.size();j++)
            names[k++] = space_to_underscore(cost_names[j] + "." + stati);
    }
    return names;
}
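
For example (with hypothetical names), if costNames() returns "mse" and "class_error" and test_statistics contains two statistics whose info() strings are "E" and "STDDEV", the returned names would be, in order:

mse.E  class_error.E  mse.STDDEV  class_error.STDDEV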

virtual void PLearn::Learner::train ( VMat  training_set) [pure virtual]

*** SUBCLASS WRITING: *** Does the actual training. Subclasses must implement this method. Upon entry, the method should call setTrainingSet(training_set). Make sure that measure(step, objective_value) is called after each training step, and that training is stopped if it returns true.

Implemented in PLearn::ConditionalGaussianDistribution, PLearn::Distribution, PLearn::EmpiricalDistribution, PLearn::LocallyWeightedDistribution, PLearn::NeuralNet, and PLearn::GraphicalBiText.

Referenced by computeLeaveOneOutCosts().

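A minimal sketch of this contract, where MyLearner, nepochs and computeTrainingObjective() are hypothetical:

void MyLearner::train(VMat training_set)
{
    setTrainingSet(training_set);
    for (int step = 1; step <= nepochs; step++)
    {
        // ... perform one training step/epoch on train_set ...
        Vec objective = computeTrainingObjective(); // hypothetical helper
        if (measure(step, objective))               // handles early stopping
            break;
    }
}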

virtual void PLearn::Learner::train ( VMat  training_set,
VMat  accept_prob,
real  max_accept_prob = 1.0,
VMat  weights = VMat() 
) [inline, virtual]

*** SUBCLASS WRITING: *** Does the actual training. Permits training from a sampling of a training set.

Definition at line 293 of file Learner.h.

References PLERROR.

    { PLERROR("This method is not implemented for this learner"); }
Array< string > PLearn::Learner::trainObjectiveNames ( ) const [virtual]

returns an array of strings corresponding to the names of the fields that will be written to objectiveout (by default this calls testResultsNames())

Definition at line 860 of file Learner.cc.

References testResultsNames().

Referenced by measure(), and openTrainObjectiveStream().

{ return testResultsNames(); }

virtual void PLearn::Learner::use ( const Vec input,
Vec output 
) [pure virtual]

*** SUBCLASS WRITING: *** Uses a trained decider on input, filling output. If the cost should also be computed, then the user should call useAndCost instead of this method.

Referenced by apply(), computeLeaveOneOutCosts(), PLearn::TopDownAsymetricDeepNetwork::computeOutput(), and PLearn::NeuralNet::useAndCost().

virtual void PLearn::Learner::use ( const Mat inputs,
Mat  outputs 
) [inline, virtual]

Definition at line 303 of file Learner.h.

References i, PLearn::TMat< T >::length(), and PLearn::use().

    { 
        for (int i=0;i<inputs.length();i++) 
        {
            Vec input = inputs(i);
            Vec output = outputs(i);
            use(input,output);
        }
    }

virtual void PLearn::Learner::useAndCost ( const Vec input,
const Vec target,
Vec  output,
Vec  cost 
) [virtual]

By default this function calls use(input, output) and then computeCost(input, target, output, cost), so you can overload computeCost to change the cost computation.

Referenced by useAndCostOnTestVec().

void PLearn::Learner::useAndCostOnTestVec ( const VMat test_set,
int  i,
const Vec output,
const Vec cost 
) [virtual]

Default version calls useAndCost on test_set(i), so you don't need to overload this method unless you want to provide a more efficient implementation (e.g. if you have precomputed things for the test_set that you can use).

Definition at line 254 of file Learner.cc.

References inputsize(), j, minibatch_size, PLearn::TVec< T >::resize(), PLearn::TVec< T >::subVec(), targetsize(), tmpvec, useAndCost(), and PLearn::VMat::width().

Referenced by applyAndComputeCosts(), computeCosts(), computeLeaveOneOutCosts(), and test().

{
    tmpvec.resize(test_set.width());
    if (minibatch_size > 1)
    {
        Vec inputvec(inputsize()*minibatch_size);
        Vec targetvec(targetsize()*minibatch_size);
        for (int k=0; k<minibatch_size;k++)
        {      
            test_set->getRow(i+k,tmpvec);
            for (int j=0; j<inputsize(); j++)
                inputvec[k*inputsize()+j] = tmpvec[j];
            for (int j=0; j<targetsize(); j++)
                targetvec[k*targetsize()+j] = tmpvec[inputsize()+j];
        }
        useAndCost(inputvec, targetvec, output, cost);
    }
    else
    {
        test_set->getRow(i,tmpvec);
        useAndCost(tmpvec.subVec(0,inputsize()), tmpvec.subVec(inputsize(),targetsize()), output, cost);
    }
}
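
In other words, each test_set row is assumed to hold inputsize() input columns followed by targetsize() target columns; for instance, with inputsize()==3 and targetsize()==1, a row [ x0 x1 x2 t ] is split into the input (x0,x1,x2) and the target (t) before useAndCost() is called.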

int PLearn::Learner::weightsize ( ) const [inline]

Definition at line 408 of file Learner.h.

Referenced by PLearn::NeuralNet::build_().

{ return weightsize_; }


Member Data Documentation

average of the objective function(s) over the last test_every steps

Definition at line 147 of file Learner.h.

Referenced by makeDeepCopyFromShallowCopy().

average of the squared objective function(s) over the last test_every steps

Definition at line 148 of file Learner.h.

Referenced by makeDeepCopyFromShallowCopy().

the step (usually epoch) at which validation cost was best

Definition at line 175 of file Learner.h.

Referenced by measure().

This is set to true to indicate that MPI parallelization occurred at the level of this learner, possibly with data distributed across several nodes (in which case PLMPI::synchronized should be false). This is initially false.

Definition at line 124 of file Learner.h.

bool PLearn::Learner::dont_parallelize

By default, MPI parallelization done at a given level prevents further parallelization at lower levels. If true, this means "don't parallelize processing at this level" (default: false).

Definition at line 141 of file Learner.h.

Referenced by declareOptions(), and test().

Definition at line 194 of file Learner.h.

Referenced by measure().

real PLearn::Learner::earlystop_max_degradation

maximum degradation in error from last best value

Definition at line 166 of file Learner.h.

Referenced by declareOptions(), measure(), oldwrite(), and setEarlyStopping().

int PLearn::Learner::earlystop_max_degraded_steps

maximum number of steps beyond best found [in version >= 1]

Definition at line 171 of file Learner.h.

Referenced by declareOptions(), measure(), oldwrite(), and setEarlyStopping().

real PLearn::Learner::earlystop_min_improvement

minimum improvement in error, otherwise we stop

Definition at line 168 of file Learner.h.

Referenced by declareOptions(), measure(), oldwrite(), and setEarlyStopping().

real PLearn::Learner::earlystop_min_value

minimum error: training stops if the monitored cost goes below this value

Definition at line 167 of file Learner.h.

Referenced by declareOptions(), measure(), oldwrite(), and setEarlyStopping().

real PLearn::Learner::earlystop_minval

Definition at line 181 of file Learner.h.

Referenced by build_(), forget(), measure(), and setEarlyStopping().

real PLearn::Learner::earlystop_previousval

temporary values relevant for early stopping

Definition at line 179 of file Learner.h.

Referenced by build_(), forget(), measure(), and setEarlyStopping().

bool PLearn::Learner::earlystop_relative_changes

are max_degradation and min_improvement relative?

Definition at line 169 of file Learner.h.

Referenced by declareOptions(), measure(), oldwrite(), and setEarlyStopping().

bool PLearn::Learner::earlystop_save_best

if yes, then return with saved "best" model

Definition at line 170 of file Learner.h.

Referenced by declareOptions(), measure(), oldwrite(), and setEarlyStopping().

int PLearn::Learner::earlystop_testresultindex

index of statistic (as returned by test) to use

Definition at line 165 of file Learner.h.

Referenced by declareOptions(), measure(), oldwrite(), and setEarlyStopping().

int PLearn::Learner::earlystop_testsetnum

early-stopping parameters: index of test set (in test_sets) to use for early stopping

Definition at line 164 of file Learner.h.

Referenced by declareOptions(), measure(), oldwrite(), and setEarlyStopping().

int PLearn::Learner::epoch_

It's used as part of the model filename saved by calling save(), which measure() does if ??? incomplete ???

Definition at line 119 of file Learner.h.

Referenced by forget(), measure(), and outputResultLineToFile().

string PLearn::Learner::expdir [protected]

the directory in which to save files related to this model (see setExperimentDirectory()) You may assume that it ends with a slash (setExperimentDirectory(...) ensures this).

Definition at line 116 of file Learner.h.

Referenced by basename(), declareOptions(), measure(), openTrainObjectiveStream(), setExperimentDirectory(), and PLearn::NeuralNet::train().

string PLearn::Learner::experiment_name

Definition at line 184 of file Learner.h.

Referenced by basename(), load(), oldwrite(), and save().

bool PLearn::Learner::force_saving_on_all_processes

if true, save() is performed on all MPI processes; otherwise in MPI only CPU0 actually saves

Definition at line 203 of file Learner.h.

Referenced by save().

Definition at line 192 of file Learner.h.

Referenced by Learner().

array of measurers:

Definition at line 190 of file Learner.h.

Referenced by measure().

int PLearn::Learner::minibatch_size

test by blocks of this size using apply rather than use

Definition at line 151 of file Learner.h.

Referenced by applyAndComputeCosts(), computeCosts(), declareOptions(), Learner(), measure(), test(), and useAndCostOnTestVec().

The log stream to use to record the objective function during training.

Definition at line 209 of file Learner.h.

DEPRECATED options in the construction of the model through setModel.

Definition at line 161 of file Learner.h.

int PLearn::Learner::outputsize_

the use() method produces an output vector of size outputsize().

Definition at line 136 of file Learner.h.

Referenced by PLearn::Distribution::build_(), declareOptions(), and oldwrite().

int PLearn::Learner::report_test_progress_every

report test progress in vlog (see below) every that many iterations: for every nth test sample, where n is a multiple of report_test_progress_every, a "Test sample #n" line is printed in vlog.

Definition at line 157 of file Learner.h.

Referenced by Learner().

save learner at each epoch?

Definition at line 173 of file Learner.h.

Referenced by declareOptions(), measure(), and oldwrite().

whether to save in basename()+".objective" the cost after each measure (e.g. after each epoch)

Definition at line 174 of file Learner.h.

Referenced by declareOptions(), and measure().

int PLearn::Learner::inputsize_

each data row is expected to contain inputsize() columns followed by targetsize() columns.

Definition at line 135 of file Learner.h.

Referenced by declareOptions(), oldwrite(), and PLearn::EmpiricalDistribution::train().

int PLearn::Learner::test_every

Definition at line 146 of file Learner.h.

Referenced by declareOptions(), Learner(), measure(), oldwrite(), and setTestDuringTrain().

opened streams where to save test results

Definition at line 81 of file Learner.h.

Referenced by freeTestResultsStreams(), getTestResultsStream(), and openTestResultsStreams().

Array<VMat> PLearn::Learner::test_sets

test sets to test on during train

Definition at line 150 of file Learner.h.

Referenced by measure(), openTestResultsStreams(), and setTestDuringTrain().

test during train specifications

Definition at line 145 of file Learner.h.

Referenced by setTestDuringTrain().

Vec PLearn::Learner::tmp_costs [static, private]

Definition at line 89 of file Learner.h.

Vec PLearn::Learner::tmp_input [static, private]
Vec PLearn::Learner::tmp_output [static, private]

Definition at line 88 of file Learner.h.

Vec PLearn::Learner::tmp_target [static, private]
Vec PLearn::Learner::tmp_weight [static, private]

Vec PLearn::Learner::tmpvec

Definition at line 77 of file Learner.h.

Referenced by useAndCostOnTestVec().

Vec PLearn::Learner::train_cost [protected]

Definition at line 562 of file Learner.h.

ofstream* PLearn::Learner::train_objective_stream [protected]

file stream where to save objectives and costs during training

Definition at line 80 of file Learner.h.

Referenced by getTrainObjectiveStream(), openTrainObjectiveStream(), and ~Learner().

VMat PLearn::Learner::train_set

the current set being used for training

Definition at line 149 of file Learner.h.

Referenced by basename().

int PLearn::Learner::use_file_if_bigger

number of elements above which a file VMatrix rather than an in-memory one should be used (when computing statistics requiring multiple passes over a test set)

Definition at line 199 of file Learner.h.

Referenced by test().

**Next generation** learners allow inputs to be anything, not just Vec

Definition at line 314 of file Learner.h.

The log stream to which all the verbose output from this learner should be sent.

Definition at line 208 of file Learner.h.

Referenced by computeLeaveOneOutCosts(), Learner(), measure(), stop_if_wanted(), and test().

int PLearn::Learner::weightsize_

number of weight fields in the target vec (all_targets = actual_target & weights)

Definition at line 137 of file Learner.h.

Referenced by declareOptions().


The documentation for this class was generated from the following files:

Learner.h
Learner.cc