PLearn 0.1
PLearn::ProcessInputCostModule Class Reference

Processes the input through an embedded OnlineLearningModule.

#include <ProcessInputCostModule.h>

Inheritance diagram for PLearn::ProcessInputCostModule:
Collaboration diagram for PLearn::ProcessInputCostModule:


Public Member Functions

 ProcessInputCostModule ()
 Default constructor.
virtual void fprop (const Vec &input, const Vec &target, real &cost) const
 Given the input and the target, compute only the first cost (of which we will compute the gradient)
virtual void fprop (const Mat &inputs, const Mat &targets, Vec &costs)
 Minibatch version.
virtual void fprop (const Vec &input, const Vec &target, Vec &cost) const
 Given the input and the target, compute a vector of costs (possibly resize it appropriately)
virtual void fprop (const Mat &inputs, const Mat &targets, Mat &costs) const
 Minibatch version.
virtual void bpropUpdate (const Vec &input, const Vec &target, real cost, Vec &input_gradient, bool accumulate=false)
 Adapt based on the cost, and compute input gradient to backpropagate.
virtual void bpropUpdate (const Mat &inputs, const Mat &targets, const Vec &costs, Mat &input_gradients, bool accumulate=false)
 Minibatch version.
virtual void bbpropUpdate (const Vec &input, const Vec &target, real cost, Vec &input_gradient, Vec &input_diag_hessian, bool accumulate=false)
 Similar to bpropUpdate, but also adapts based on an estimate of the diagonal of the Hessian matrix, and propagates it back.
virtual void forget ()
 Reset the parameters to the state they would be BEFORE starting training.
virtual void finalize ()
 Perform some processing after training, or after a series of fprop/bpropUpdate calls to prepare the model for truly out-of-sample operation.
virtual bool bpropDoesNothing ()
 In case bpropUpdate does not do anything, make it known.
virtual void setLearningRate (real dynamic_learning_rate)
 If this class has a learning rate (or something close to it), set it.
virtual TVec< string > costNames ()
 Indicates the name of the computed costs.
virtual string classname () const
virtual OptionList & getOptionList () const
virtual OptionMap & getOptionMap () const
virtual RemoteMethodMap & getRemoteMethodMap () const
virtual ProcessInputCostModule * deepCopy (CopiesMap &copies) const
virtual void build ()
 Post-constructor.
virtual void makeDeepCopyFromShallowCopy (CopiesMap &copies)
 Transforms a shallow copy into a deep copy.

Static Public Member Functions

static string _classname_ ()
static OptionList & _getOptionList_ ()
static RemoteMethodMap & _getRemoteMethodMap_ ()
static Object * _new_instance_for_typemap_ ()
static bool _isa_ (const Object *o)
static void _static_initialize_ ()
static const PPath & declaringFile ()

Public Attributes

PP< OnlineLearningModule > processing_module
 Module that processes the input.
PP< CostModule > cost_module
 CostModule that outputs this cost.
int processed_size
 Size of processing_module's output.

Static Public Attributes

static StaticInitializer _static_initializer_

Static Protected Member Functions

static void declareOptions (OptionList &ol)
 Declares the class options.

Private Types

typedef CostModule inherited

Private Member Functions

void build_ ()
 This does the actual building.

Private Attributes

Vec processed_value
Mat processed_values
Vec processed_gradient
Mat processed_gradients
Vec processed_diag_hessian
Mat processed_diag_hessians

Detailed Description

Processes the input through an embedded OnlineLearningModule.

This Module embeds an OnlineLearningModule, processing_module, and a CostModule, cost_module. The input goes through processing_module, the output of which is used as input by the CostModule. If you want the input to go through several processing steps, you can use a ModuleStackModule as processing_module.
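
The following is a minimal C++ sketch of how the two sub-modules might be wired together by calling code. It is an illustration only, not code from the PLearn sources: makeProcessedCost is a hypothetical helper, and the concrete OnlineLearningModule and CostModule instances are assumed to be supplied by the caller.

#include <ProcessInputCostModule.h>

using namespace PLearn;

// Hypothetical helper (assumption, not part of PLearn): wire an arbitrary
// processing module and a cost module into a ProcessInputCostModule.
PP<ProcessInputCostModule> makeProcessedCost( PP<OnlineLearningModule> processing,
                                              PP<CostModule> cost )
{
    PP<ProcessInputCostModule> m = new ProcessInputCostModule();
    m->processing_module = processing;  // maps the raw input to a processed vector
    m->cost_module = cost;              // computes the cost on that processed vector
    m->build();                         // copies sizes and shares random_gen (see build_())
    return m;
}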

Todo:
: code ModuleStackModule

Definition at line 57 of file ProcessInputCostModule.h.


Member Typedef Documentation

typedef CostModule PLearn::ProcessInputCostModule::inherited [private]

Reimplemented from PLearn::CostModule.

Definition at line 59 of file ProcessInputCostModule.h.


Constructor & Destructor Documentation

PLearn::ProcessInputCostModule::ProcessInputCostModule ( )

Default constructor.

Definition at line 56 of file ProcessInputCostModule.cc.

ProcessInputCostModule::ProcessInputCostModule() :
    processed_size( -1 )
{
}

Member Function Documentation

string PLearn::ProcessInputCostModule::_classname_ ( ) [static]

Reimplemented from PLearn::CostModule.

Definition at line 54 of file ProcessInputCostModule.cc.

OptionList & PLearn::ProcessInputCostModule::_getOptionList_ ( ) [static]

Reimplemented from PLearn::CostModule.

Definition at line 54 of file ProcessInputCostModule.cc.

RemoteMethodMap & PLearn::ProcessInputCostModule::_getRemoteMethodMap_ ( ) [static]

Reimplemented from PLearn::CostModule.

Definition at line 54 of file ProcessInputCostModule.cc.

bool PLearn::ProcessInputCostModule::_isa_ ( const Object * o) [static]

Reimplemented from PLearn::CostModule.

Definition at line 54 of file ProcessInputCostModule.cc.

Object * PLearn::ProcessInputCostModule::_new_instance_for_typemap_ ( ) [static]

Reimplemented from PLearn::CostModule.

Definition at line 54 of file ProcessInputCostModule.cc.

void PLearn::ProcessInputCostModule::_static_initialize_ ( ) [static]

Reimplemented from PLearn::CostModule.

Definition at line 54 of file ProcessInputCostModule.cc.

void PLearn::ProcessInputCostModule::bbpropUpdate ( const Vec & input,
const Vec & target,
real  cost,
Vec & input_gradient,
Vec & input_diag_hessian,
bool  accumulate = false 
) [virtual]

Similar to bpropUpdate, but also adapts based on an estimate of the diagonal of the Hessian matrix, and propagates it back.

If these methods are defined, you can use them INSTEAD of bpropUpdate(...)

Reimplemented from PLearn::CostModule.

Definition at line 246 of file ProcessInputCostModule.cc.

References cost_module, PLearn::OnlineLearningModule::input_size, PLASSERT, PLASSERT_MSG, processed_diag_hessian, processed_gradient, processed_value, processing_module, PLearn::TVec< T >::size(), and PLearn::CostModule::target_size.

{
    PLASSERT( processing_module );
    PLASSERT( cost_module );
    PLASSERT( input.size() == input_size );
    PLASSERT( target.size() == target_size );

    if( accumulate )
    {
        PLASSERT_MSG( input_gradient.size() == input_size,
                      "Cannot resize input_gradient AND accumulate into it" );
        PLASSERT_MSG( input_diag_hessian.size() == input_size,
                      "Cannot resize input_diag_hessian AND accumulate into it"
                    );
    }

    cost_module->bbpropUpdate( processed_value, target, cost,
                               processed_gradient, processed_diag_hessian );
    processing_module->bbpropUpdate( input, processed_value,
                                     input_gradient, processed_gradient,
                                     input_diag_hessian,
                                     processed_diag_hessian,
                                     accumulate );
}


bool PLearn::ProcessInputCostModule::bpropDoesNothing ( ) [virtual]

In case bpropUpdate does not do anything, make it known.

Reimplemented from PLearn::OnlineLearningModule.

Definition at line 332 of file ProcessInputCostModule.cc.

References cost_module, and processing_module.

{
    return processing_module->bpropDoesNothing()
        && cost_module->bpropDoesNothing();
}
void PLearn::ProcessInputCostModule::bpropUpdate ( const Vec & input,
const Vec & target,
real  cost,
Vec & input_gradient,
bool  accumulate = false 
) [virtual]

Adapt based on the cost, and compute input gradient to backpropagate.

Reimplemented from PLearn::CostModule.

Definition at line 195 of file ProcessInputCostModule.cc.

References cost_module, PLearn::OnlineLearningModule::input_size, PLASSERT, PLASSERT_MSG, processed_gradient, processed_value, processing_module, PLearn::TVec< T >::size(), and PLearn::CostModule::target_size.

{
    PLASSERT( processing_module );
    PLASSERT( cost_module );
    PLASSERT( input.size() == input_size );
    PLASSERT( target.size() == target_size );

    if( accumulate )
    {
        PLASSERT_MSG( input_gradient.size() == input_size,
                      "Cannot resize input_gradient AND accumulate into it" );
    }

    cost_module->bpropUpdate( processed_value, target, cost,
                              processed_gradient );
    processing_module->bpropUpdate( input, processed_value,
                                    input_gradient, processed_gradient,
                                    accumulate );
}

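For context, here is a hedged sketch of calling code performing one online training step with this method; the surrounding function trainOneStep and the 0.01 learning rate are assumptions for illustration and do not appear in PLearn.

// Assumed calling code: one online training step on a single example.
void trainOneStep( PP<ProcessInputCostModule> module,
                   const Vec& input, const Vec& target )
{
    real cost;
    Vec input_gradient;                        // resized and filled by bpropUpdate
    module->setLearningRate( 0.01 );           // example value
    module->fprop( input, target, cost );      // forward pass through both sub-modules
    module->bpropUpdate( input, target, cost,  // updates both sub-modules and
                         input_gradient );     // back-propagates the input gradient
}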

void PLearn::ProcessInputCostModule::bpropUpdate ( const Mat & inputs,
const Mat & targets,
const Vec & costs,
Mat & input_gradients,
bool  accumulate = false 
) [virtual]

Minibatch version.

Reimplemented from PLearn::CostModule.

Definition at line 217 of file ProcessInputCostModule.cc.

References cost_module, PLearn::OnlineLearningModule::input_size, PLearn::TMat< T >::length(), PLASSERT, PLASSERT_MSG, processed_gradients, processed_values, processing_module, PLearn::TVec< T >::size(), PLearn::CostModule::target_size, and PLearn::TMat< T >::width().

{
    PLASSERT( processing_module );
    PLASSERT( cost_module );
    PLASSERT( inputs.width() == input_size );
    PLASSERT( targets.width() == target_size );
    PLASSERT( inputs.length() == targets.length() );
    PLASSERT( inputs.length() == costs.size() );

    if( accumulate )
    {
        PLASSERT_MSG( input_gradients.width() == input_size
                      && input_gradients.length() == inputs.length(),
                      "Cannot resize input_gradient AND accumulate into it" );
    }

    cost_module->bpropUpdate( processed_values, targets, costs,
                              processed_gradients );
    processing_module->bpropUpdate( inputs, processed_values,
                                    input_gradients, processed_gradients,
                                    accumulate );
}


void PLearn::ProcessInputCostModule::build ( ) [virtual]

Post-constructor.

The normal implementation should simply call inherited::build(), then this class's build_(). This method should be callable again at later times, after modifying some option fields to change the "architecture" of the object.

Reimplemented from PLearn::CostModule.

Definition at line 116 of file ProcessInputCostModule.cc.

References PLearn::CostModule::build(), and build_().

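The references above (PLearn::CostModule::build() and build_()) suggest the conventional PLearn build pattern; the sketch below shows that pattern under this assumption and is not a copy of the actual body in ProcessInputCostModule.cc.

void ProcessInputCostModule::build()
{
    inherited::build();  // let CostModule and its ancestors build themselves first
    build_();            // then perform this class's own wiring (see build_() below)
}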

void PLearn::ProcessInputCostModule::build_ ( ) [private]

This does the actual building.

Reimplemented from PLearn::CostModule.

Definition at line 86 of file ProcessInputCostModule.cc.

References cost_module, PLearn::OnlineLearningModule::input_size, PLearn::OnlineLearningModule::output_size, PLASSERT, processed_size, processing_module, PLearn::OnlineLearningModule::random_gen, and PLearn::CostModule::target_size.

Referenced by build().

{
    if( processing_module )
    {
        input_size = processing_module->input_size;
        processed_size = processing_module->output_size;
        // If we have a random_gen and processing_module does not, share it
        if( random_gen && !(processing_module->random_gen) )
        {
            processing_module->random_gen = random_gen;
            processing_module->forget();
        }
    }

    if( cost_module )
    {
        output_size = cost_module->output_size;
        target_size = cost_module->target_size;
        // If we have a random_gen and cost_module does not, share it
        if( random_gen && !(cost_module->random_gen) )
        {
            cost_module->random_gen = random_gen;
            cost_module->forget();
        }
    }

    if( processing_module && cost_module )
        PLASSERT( processed_size == cost_module->input_size );
}


string PLearn::ProcessInputCostModule::classname ( ) const [virtual]

Reimplemented from PLearn::CostModule.

Definition at line 54 of file ProcessInputCostModule.cc.

Referenced by costNames().


TVec< string > PLearn::ProcessInputCostModule::costNames ( ) [virtual]

Indicates the name of the computed costs.

Reimplemented from PLearn::CostModule.

Definition at line 305 of file ProcessInputCostModule.cc.

References classname(), cost_module, i, and PLearn::OnlineLearningModule::name.

{
    if (name == "" || name == classname())
        return cost_module->costNames();
    else
    {
        int n_costs = cost_module->costNames().length();
        TVec<string> cost_names(n_costs);
        for (int i=0; i<n_costs; i++)
            cost_names[i] = name + "." + cost_module->costNames()[i];

        return cost_names;
    }
}


void PLearn::ProcessInputCostModule::declareOptions ( OptionList & ol) [static, protected]

Declares the class options.

Reimplemented from PLearn::CostModule.

Definition at line 61 of file ProcessInputCostModule.cc.

References PLearn::OptionBase::buildoption, cost_module, PLearn::declareOption(), PLearn::CostModule::declareOptions(), PLearn::OptionBase::learntoption, processed_size, and processing_module.

{
    // declareOption(ol, "myoption", &ProcessInputCostModule::myoption,
    //               OptionBase::buildoption,
    //               "Help text describing this option");

    declareOption(ol, "processing_module",
                  &ProcessInputCostModule::processing_module,
                  OptionBase::buildoption,
                  "Module that processes the input");

    declareOption(ol, "cost_module",
                  &ProcessInputCostModule::cost_module,
                  OptionBase::buildoption,
                  "Module that outputs the cost");

    declareOption(ol, "processed_size",
                  &ProcessInputCostModule::processed_size,
                  OptionBase::learntoption,
                  "Size of processing_module's output");

    // Now call the parent class' declareOptions
    inherited::declareOptions(ol);
}


static const PPath& PLearn::ProcessInputCostModule::declaringFile ( ) [inline, static]

Reimplemented from PLearn::CostModule.

Definition at line 155 of file ProcessInputCostModule.h.

ProcessInputCostModule * PLearn::ProcessInputCostModule::deepCopy ( CopiesMap & copies) const [virtual]

Reimplemented from PLearn::CostModule.

Definition at line 54 of file ProcessInputCostModule.cc.

void PLearn::ProcessInputCostModule::finalize ( ) [virtual]

Perform some processing after training, or after a series of fprop/bpropUpdate calls to prepare the model for truly out-of-sample operation.

Reimplemented from PLearn::OnlineLearningModule.

Definition at line 323 of file ProcessInputCostModule.cc.

References cost_module, and processing_module.

{
    processing_module->finalize();
    cost_module->finalize();
}
void PLearn::ProcessInputCostModule::forget ( ) [virtual]

Reset the parameters to the state they would be BEFORE starting training.

Note that this method is necessarily called from build().

Reimplemented from PLearn::CostModule.

Definition at line 278 of file ProcessInputCostModule.cc.

References PLearn::TVec< T >::clear(), cost_module, PLASSERT, PLWARNING, processed_diag_hessian, processed_gradient, processed_value, processing_module, and PLearn::OnlineLearningModule::random_gen.

{
    PLASSERT( processing_module );
    PLASSERT( cost_module );

    processed_value.clear();
    processed_gradient.clear();
    processed_diag_hessian.clear();

    if( !random_gen )
    {
        PLWARNING("CombiningCostsModule: cannot forget() without random_gen");
        return;
    }

    // Ensures processing_module and cost_module can forget
    if( !(processing_module->random_gen) )
        processing_module->random_gen = random_gen;
    processing_module->forget();
    if( !(cost_module->random_gen) )
        cost_module->random_gen = random_gen;
    cost_module->forget();
}


void PLearn::ProcessInputCostModule::fprop ( const Mat & inputs,
const Mat & targets,
Mat & costs 
) const [virtual]

Minibatch version.

Reimplemented from PLearn::CostModule.

Definition at line 177 of file ProcessInputCostModule.cc.

References cost_module, PLearn::OnlineLearningModule::input_size, PLearn::TMat< T >::length(), PLASSERT, processed_values, processing_module, PLearn::CostModule::target_size, and PLearn::TMat< T >::width().

{
    PLASSERT( processing_module );
    PLASSERT( cost_module );
    PLASSERT( inputs.width() == input_size );
    PLASSERT( targets.width() == target_size );
    PLASSERT( inputs.length() == targets.length() );

    processing_module->fprop( inputs, processed_values );
    cost_module->fprop( processed_values, targets, costs );
}

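A hedged sketch of calling code for this minibatch version follows; the batch size, the fill values, and the surrounding function evalMinibatchCosts are assumptions for illustration only.

// Assumed calling code: compute per-sample costs on a minibatch, no update.
void evalMinibatchCosts( PP<ProcessInputCostModule> module )
{
    int n = 10;                               // example minibatch size
    Mat inputs( n, module->input_size );      // one input per row
    Mat targets( n, module->target_size );    // one target per row
    Mat costs;                                // filled (and assumed resized) by fprop
    inputs.fill( 0.1 );                       // dummy data for illustration
    targets.fill( 1.0 );
    module->fprop( inputs, targets, costs );  // costs(i, k): cost k of sample i
}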

void PLearn::ProcessInputCostModule::fprop ( const Vec & input,
const Vec & target,
Vec & cost 
) const [virtual]

Given the input and the target, compute a vector of costs (possibly resize it appropriately)

Reimplemented from PLearn::CostModule.

Definition at line 165 of file ProcessInputCostModule.cc.

References cost_module, PLearn::OnlineLearningModule::input_size, PLASSERT, processed_value, processing_module, PLearn::TVec< T >::size(), and PLearn::CostModule::target_size.

{
    PLASSERT( processing_module );
    PLASSERT( cost_module );
    PLASSERT( input.size() == input_size );
    PLASSERT( target.size() == target_size );

    processing_module->fprop( input, processed_value );
    cost_module->fprop( processed_value, target, cost );
}


void PLearn::ProcessInputCostModule::fprop ( const Mat & inputs,
const Mat & targets,
Vec & costs 
) [virtual]

Minibatch version.

Reimplemented from PLearn::CostModule.

Definition at line 152 of file ProcessInputCostModule.cc.

References cost_module, PLearn::OnlineLearningModule::input_size, PLearn::TMat< T >::length(), PLASSERT, processed_values, processing_module, PLearn::CostModule::target_size, and PLearn::TMat< T >::width().

{
    PLASSERT( processing_module );
    PLASSERT( cost_module );
    PLASSERT( inputs.width() == input_size );
    PLASSERT( targets.width() == target_size );
    PLASSERT( inputs.length() == targets.length() );

    processing_module->fprop( inputs, processed_values );
    cost_module->fprop( processed_values, targets, costs );
}


void PLearn::ProcessInputCostModule::fprop ( const Vec & input,
const Vec & target,
real & cost 
) const [virtual]

Given the input and the target, compute only the first cost (of which we will compute the gradient)

Reimplemented from PLearn::CostModule.

Definition at line 140 of file ProcessInputCostModule.cc.

References cost_module, PLearn::OnlineLearningModule::input_size, PLASSERT, processed_value, processing_module, PLearn::TVec< T >::size(), and PLearn::CostModule::target_size.

{
    PLASSERT( processing_module );
    PLASSERT( cost_module );
    PLASSERT( input.size() == input_size );
    PLASSERT( target.size() == target_size );

    processing_module->fprop( input, processed_value );
    cost_module->fprop( processed_value, target, cost );
}


OptionList & PLearn::ProcessInputCostModule::getOptionList ( ) const [virtual]

Reimplemented from PLearn::CostModule.

Definition at line 54 of file ProcessInputCostModule.cc.

OptionMap & PLearn::ProcessInputCostModule::getOptionMap ( ) const [virtual]

Reimplemented from PLearn::CostModule.

Definition at line 54 of file ProcessInputCostModule.cc.

RemoteMethodMap & PLearn::ProcessInputCostModule::getRemoteMethodMap ( ) const [virtual]

Reimplemented from PLearn::CostModule.

Definition at line 54 of file ProcessInputCostModule.cc.

void PLearn::ProcessInputCostModule::makeDeepCopyFromShallowCopy ( CopiesMap & copies) [virtual]

Transforms a shallow copy into a deep copy.

void PLearn::ProcessInputCostModule::setLearningRate ( real  dynamic_learning_rate) [virtual]

If this class has a learning rate (or something close to it), set it.

Reimplemented from PLearn::OnlineLearningModule.

Definition at line 341 of file ProcessInputCostModule.cc.

References cost_module, and processing_module.

{
    processing_module->setLearningRate( dynamic_learning_rate );
    cost_module->setLearningRate( dynamic_learning_rate );
}

Member Data Documentation

StaticInitializer ProcessInputCostModule::_static_initializer_ [static]

Reimplemented from PLearn::CostModule.

Definition at line 155 of file ProcessInputCostModule.h.

Vec PLearn::ProcessInputCostModule::processed_diag_hessian [private]

Definition at line 185 of file ProcessInputCostModule.h.

Referenced by bbpropUpdate(), forget(), and makeDeepCopyFromShallowCopy().

Mat PLearn::ProcessInputCostModule::processed_diag_hessians [private]

Definition at line 186 of file ProcessInputCostModule.h.

Referenced by makeDeepCopyFromShallowCopy().

Mat PLearn::ProcessInputCostModule::processed_gradients [private]

Definition at line 184 of file ProcessInputCostModule.h.

Referenced by bpropUpdate(), and makeDeepCopyFromShallowCopy().

int PLearn::ProcessInputCostModule::processed_size

Size of processing_module's output.

Definition at line 71 of file ProcessInputCostModule.h.

Referenced by build_(), and declareOptions().

Mat PLearn::ProcessInputCostModule::processed_values [private]

Definition at line 182 of file ProcessInputCostModule.h.

Referenced by bpropUpdate(), fprop(), and makeDeepCopyFromShallowCopy().


The documentation for this class was generated from the following files:

 ProcessInputCostModule.h
 ProcessInputCostModule.cc