PLearn 0.1
PLearn::CostModule Class Reference

General class representing a cost function module. More...

#include <CostModule.h>

Inheritance diagram for PLearn::CostModule (graph not shown).
Collaboration diagram for PLearn::CostModule (graph not shown).


Public Member Functions

 CostModule ()
 Default constructor.
virtual void fprop (const Vec &input, const Vec &target, real &cost) const
 given the input and target, compute the main output (cost)
virtual void fprop (const Mat &inputs, const Mat &targets, Vec &costs)
 Mini-batch version.
virtual void fprop (const Vec &input, const Vec &target, Vec &cost) const
 this version allows for several costs
virtual void fprop (const Mat &inputs, const Mat &targets, Mat &costs) const
 Mini-batch version with several costs.
virtual void fprop (const Vec &input_and_target, Vec &output) const
 this version is provided for compatibility with the parent class
virtual void bpropUpdate (const Vec &input, const Vec &target, real cost, Vec &input_gradient, bool accumulate=false)
 Adapt based on the cost gradient, and obtain the input gradient.
virtual void bpropUpdate (const Mat &inputs, const Mat &targets, const Vec &costs, Mat &input_gradients, bool accumulate=false)
 Adapt based on the mini-batch cost gradient, and obtain the mini-batch input gradient.
virtual void bpropUpdate (const Vec &input, const Vec &target, real cost)
 Without the input gradient.
virtual void bpropUpdate (const Mat &inputs, const Mat &targets, const Vec &costs)
virtual void bpropUpdate (const Vec &input_and_target, const Vec &output, Vec &input_and_target_gradient, const Vec &output_gradient, bool accumulate=false)
 this version is provided for compatibility with the parent class.
virtual void bpropUpdate (const Mat &input, const Mat &output, Mat &input_gradient, const Mat &output_gradient, bool accumulate=false)
 SOON TO BE DEPRECATED, USE bpropAccUpdate(const TVec<Mat*>& ports_value, const TVec<Mat*>& ports_gradient)
virtual void bbpropUpdate (const Vec &input, const Vec &target, real cost, Vec &input_gradient, Vec &input_diag_hessian, bool accumulate=false)
 Similar to bpropUpdate, but also adapts based on an estimate of the diagonal of the Hessian matrix, and propagates it back.
virtual void bbpropUpdate (const Vec &input, const Vec &target, real cost)
 Without the input gradient and diag_hessian.
virtual void bbpropUpdate (const Vec &input_and_target, const Vec &output, Vec &input_and_target_gradient, const Vec &output_gradient, Vec &input_and_target_diag_hessian, const Vec &output_diag_hessian, bool accumulate=false)
 this version is provided for compatibility with the parent class.
virtual void forget ()
 reset the parameters to the state they would be BEFORE starting training.
virtual TVec< string > costNames ()
 Indicates the name of the computed costs.
virtual const TVec< string > & getPorts ()
 Overridden so that the default ports being returned are "prediction", "target" and "cost".
virtual const TMat< int > & getPortSizes ()
 Overridden so that the default behavior returns proper widths for the 'prediction', 'target' and 'cost' ports.
virtual void fprop (const TVec< Mat * > &ports_value)
 Overridden to try to use the standard mini-batch fprop when possible.
virtual void bpropAccUpdate (const TVec< Mat * > &ports_value, const TVec< Mat * > &ports_gradient)
 Overridden to try to use the standard mini-batch bprop when possible.
virtual string classname () const
virtual OptionList & getOptionList () const
virtual OptionMap & getOptionMap () const
virtual RemoteMethodMap & getRemoteMethodMap () const
virtual CostModule * deepCopy (CopiesMap &copies) const
virtual void build ()
 Post-constructor.
virtual void makeDeepCopyFromShallowCopy (CopiesMap &copies)
 Transforms a shallow copy into a deep copy.

Static Public Member Functions

static string _classname_ ()
static OptionList & _getOptionList_ ()
static RemoteMethodMap & _getRemoteMethodMap_ ()
static Object * _new_instance_for_typemap_ ()
static bool _isa_ (const Object *o)
static void _static_initialize_ ()
static const PPath & declaringFile ()

Public Attributes

int target_size
 Size of the target.

Static Public Attributes

static StaticInitializer _static_initializer_

Static Protected Member Functions

static void declareOptions (OptionList &ol)
 Declares the class options.

Protected Attributes

Vec tmp_costs
Vec tmp_input_and_target
Vec tmp_input_and_target_gradient
Vec tmp_input_and_target_diag_hessian
Mat tmp_costs_mat
Mat tmp_input_gradients
Vec store_costs
 Used to store costs temporarily.

Private Types

typedef OnlineLearningModule inherited

Private Member Functions

void build_ ()
 This does the actual building.

Detailed Description

General class representing a cost function module.

It usually takes an input and a target, and outputs one cost. It can also output several costs; in that case, the first one is the objective function to be decreased.

Definition at line 53 of file CostModule.h.
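
A minimal sketch (not part of PLearn) of what a concrete subclass looks like: only the multi-cost fprop must be provided, since the base implementation raises an error for it (see fprop below), and the remaining methods fall back on the generic CostModule defaults. The class and cost names are illustrative, and the usual PLearn object plumbing (declareOptions, class-declaration macros) is omitted.

class AbsDiffCostModule : public CostModule
{
public:
    //! Compute the sum of absolute differences between input and target.
    virtual void fprop(const Vec& input, const Vec& target, Vec& cost) const
    {
        cost.resize(1);
        real s = 0;
        for (int i = 0; i < input.size(); i++) {
            real d = input[i] - target[i];
            s += d > 0 ? d : -d;
        }
        cost[0] = s; // first (and here only) cost: the objective to decrease
    }

    //! Indicate the name of the computed cost.
    virtual TVec<string> costNames()
    {
        TVec<string> names;
        names.append("abs_diff");
        return names;
    }
};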


Member Typedef Documentation

typedef OnlineLearningModule PLearn::CostModule::inherited [private]

Constructor & Destructor Documentation

PLearn::CostModule::CostModule ( )

Default constructor.

Definition at line 55 of file CostModule.cc.

CostModule::CostModule() :
    target_size(-1)
{
}

Member Function Documentation

string PLearn::CostModule::_classname_ ( ) [static]
OptionList & PLearn::CostModule::_getOptionList_ ( ) [static]
RemoteMethodMap & PLearn::CostModule::_getRemoteMethodMap_ ( ) [static]
bool PLearn::CostModule::_isa_ ( const Object * o) [static]
Object * PLearn::CostModule::_new_instance_for_typemap_ ( ) [static]
StaticInitializer PLearn::CostModule::_static_initializer_ [static]
void PLearn::CostModule::_static_initialize_ ( ) [static]
void PLearn::CostModule::bbpropUpdate ( const Vec & input,
const Vec & target,
real  cost,
Vec & input_gradient,
Vec & input_diag_hessian,
bool  accumulate = false 
) [virtual]

Similar to bpropUpdate, but also adapts based on an estimate of the diagonal of the Hessian matrix, and propagates it back.

Reimplemented in PLearn::CombiningCostsModule, PLearn::NLLCostModule, PLearn::ProcessInputCostModule, PLearn::SoftmaxNLLCostModule, and PLearn::SquaredErrorCostModule.

Definition at line 322 of file CostModule.cc.

References PLearn::OnlineLearningModule::input_size, PLASSERT_MSG, PLearn::TVec< T >::resize(), PLearn::TVec< T >::size(), PLearn::TVec< T >::subVec(), target_size, tmp_costs, tmp_input_and_target, tmp_input_and_target_diag_hessian, tmp_input_and_target_gradient, and zero.

Referenced by bbpropUpdate().

{
    // default version, calling the bpropUpdate with inherited prototype
    tmp_input_and_target.resize( input_size + target_size );
    tmp_input_and_target.subVec( 0, input_size ) << input;
    tmp_input_and_target.subVec( input_size, target_size ) << target;
    tmp_input_and_target_gradient.resize( input_size + target_size );
    tmp_input_and_target_diag_hessian.resize( input_size + target_size );
    tmp_costs.resize(1);
    tmp_costs[0] = cost;
    static const Vec one(1,1);
    static const Vec zero(1);

    bbpropUpdate( tmp_input_and_target, tmp_costs,
                  tmp_input_and_target_gradient, one,
                  tmp_input_and_target_diag_hessian, zero,
                  accumulate );

    if( accumulate )
    {
        PLASSERT_MSG( input_gradient.size() == input_size,
                      "Cannot resize input_gradient AND accumulate into it" );
        PLASSERT_MSG( input_diag_hessian.size() == input_size,
                      "Cannot resize input_diag_hessian AND accumulate into it"
                    );

        input_gradient += tmp_input_and_target_gradient.subVec( 0, input_size );
        input_diag_hessian +=
            tmp_input_and_target_diag_hessian.subVec( 0, input_size );
    }
    else
    {
        input_gradient.resize( input_size );
        input_diag_hessian.resize( input_size );
        input_gradient << tmp_input_and_target_gradient.subVec( 0, input_size );
        input_diag_hessian <<
            tmp_input_and_target_diag_hessian.subVec( 0, input_size );
    }
}
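
This default version packs (input, target) into a single vector and delegates to the parent-class prototype documented below. A hedged usage sketch, assuming a module object of a concrete subclass such as PLearn::SquaredErrorCostModule (listed above as reimplementing this method), already built with input_size == target_size == 3:

    Vec input(3), target(3);      // assumed filled with data elsewhere
    real cost;
    Vec input_gradient, input_diag_hessian;
    module.fprop(input, target, cost);                 // forward pass
    module.bbpropUpdate(input, target, cost,           // backward pass, returning the
                        input_gradient,                // input gradient and a diagonal
                        input_diag_hessian);           // Hessian estimate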

void PLearn::CostModule::bbpropUpdate ( const Vec & input,
const Vec & target,
real  cost 
) [virtual]
void PLearn::CostModule::bbpropUpdate ( const Vec & input_and_target,
const Vec & output,
Vec & input_and_target_gradient,
const Vec & output_gradient,
Vec & input_and_target_diag_hessian,
const Vec & output_diag_hessian,
bool  accumulate = false 
) [virtual]

this version is provided for compatibility with the parent class.

Reimplemented from PLearn::OnlineLearningModule.

Definition at line 370 of file CostModule.cc.

References PLearn::OnlineLearningModule::bbpropUpdate().

{
    inherited::bbpropUpdate( input_and_target, output,
                             input_and_target_gradient,
                             output_gradient,
                             input_and_target_diag_hessian,
                             output_diag_hessian,
                             accumulate );
}

void PLearn::CostModule::bpropAccUpdate ( const TVec< Mat * > &  ports_value,
const TVec< Mat * > &  ports_gradient 
) [virtual]

Overridden to try to use the standard mini-batch bprop when possible.

Reimplemented from PLearn::OnlineLearningModule.

Reimplemented in PLearn::CombiningCostsModule, PLearn::CrossEntropyCostModule, PLearn::LayerCostModule, PLearn::NLLCostModule, and PLearn::SoftmaxNLLCostModule.

Definition at line 105 of file CostModule.cc.

References PLearn::OnlineLearningModule::bpropAccUpdate(), bpropUpdate(), PLearn::OnlineLearningModule::checkProp(), classname(), PLearn::TMat< T >::column(), PLearn::fast_exact_is_equal(), i, PLearn::TMat< T >::isEmpty(), j, PLearn::TMat< T >::length(), PLearn::TVec< T >::length(), PLearn::TMat< T >::mod(), PLearn::OnlineLearningModule::name, PLASSERT, PLERROR, PLearn::TMat< T >::resize(), PLearn::TVec< T >::resize(), store_costs, PLearn::TMat< T >::toVec(), and PLearn::TMat< T >::width().

{
    if (ports_gradient.length() == 3) {
        Mat* pred_grad = ports_gradient[0];
        Mat* target_grad = ports_gradient[1];
        Mat* cost_grad = ports_gradient[2];
        if (!pred_grad && !target_grad) {
            // No gradient is being asked.
            checkProp(ports_gradient);
            return;
        }
        if (pred_grad && !target_grad && cost_grad &&
            pred_grad->isEmpty() && !cost_grad->isEmpty())
        {
            // We can probably use the standard mini-batch bpropUpdate.
            // Currently we allow this only in the case where a single cost is
            // computed. This is because the bpropUpdate method in CostModule
            // takes only the value of the first cost as parameter, and we may
            // need the value of all costs.
            PLASSERT( cost_grad->width() == 1 );
#ifdef BOUNDCHECK
            // The gradient on the cost must be one if we want to re-use
            // exactly the existing code.
            for (int i = 0; i < cost_grad->length(); i++) {
                for (int j = 0; j < cost_grad->width(); j++) {
                    PLASSERT( fast_exact_is_equal((*cost_grad)(i, j), 1) );
                }
            }
#endif
            Mat* cost_val = ports_value[2];
            PLASSERT( cost_val );
            Vec costs_vec;
            if (cost_val->mod() == 1) {
                // We can view the cost column matrix as a vector.
                costs_vec = cost_val->toVec();
            } else {
                // We need to make a copy of the cost.
                store_costs.resize(cost_val->length());
                store_costs << cost_val->column(0);
                costs_vec = store_costs;
            }
            Mat* pred_val = ports_value[0];
            Mat* target_val = ports_value[1];
            PLASSERT( pred_val && target_val );
            pred_grad->resize(pred_val->length(), pred_val->width());
            bpropUpdate(*pred_val, *target_val, costs_vec, *pred_grad, true);
            checkProp(ports_gradient);
            return;
        }
        if (pred_grad && pred_grad->isEmpty() && !cost_grad) {
            // We are asked to compute a gradient w.r.t. prediction, but no
            // gradient w.r.t. output cost is being provided.
            PLERROR("In CostModule::bpropAccUpdate - Module '%s' of class '%s'"
                    " cannot compute a gradient w.r.t. its 'prediction' port "
                    "when no gradient w.r.t. its 'cost' port is being provided"
                    " (if within a NetworkModule, ensure incoming connections "
                    "to '%s.prediction' have their 'propagate_gradient' flag "
                    "set to false, or outgoing connections from '%s.cost' have"
                    " their 'propagate_gradient' flag set to true).",
                    OnlineLearningModule::name.c_str(), classname().c_str(),
                    OnlineLearningModule::name.c_str(),
                    OnlineLearningModule::name.c_str());
        }
    }
    // Try to use the parent's default method.
    inherited::bpropAccUpdate(ports_value, ports_gradient);
}
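
The single-cost fast path above requires the 'prediction' gradient matrix to be empty (meaning "please compute it") and the incoming 'cost' gradient to be all ones. A minimal sketch of such a call, with illustrative names and a hypothetical batch_size; an empty Mat requests computation for a port, and a null pointer ignores it:

    int batch_size = 10;
    Mat prediction(batch_size, module.input_size);  // filled by an earlier fprop
    Mat target(batch_size, module.target_size);
    Mat cost(batch_size, 1);                        // cost values from that fprop
    Mat prediction_gradient;                        // empty: to be computed
    Mat cost_gradient(batch_size, 1);
    cost_gradient.fill(1);                          // must be all ones (see BOUNDCHECK above)

    TVec<Mat*> ports_value(3), ports_gradient(3);
    ports_value[0] = &prediction;
    ports_value[1] = &target;
    ports_value[2] = &cost;
    ports_gradient[0] = &prediction_gradient;       // gradient being asked for
    ports_gradient[1] = 0;                          // no gradient w.r.t. target
    ports_gradient[2] = &cost_gradient;             // gradient being provided
    module.bpropAccUpdate(ports_value, ports_gradient);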

void PLearn::CostModule::bpropUpdate ( const Mat & inputs,
const Mat & targets,
const Vec & costs 
) [virtual]

Reimplemented in PLearn::ClassErrorCostModule.

Definition at line 303 of file CostModule.cc.

References bpropUpdate(), classname(), PLWARNING, and tmp_input_gradients.

{
    PLWARNING("In CostModule::bpropUpdate - Using default (possibly "
        "inefficient) version for class %s", classname().c_str());
    bpropUpdate( inputs, targets, costs, tmp_input_gradients );
}

void PLearn::CostModule::bpropUpdate ( const Vec & input,
const Vec & target,
real  cost 
) [virtual]

Without the input gradient.

Reimplemented in PLearn::ClassErrorCostModule, PLearn::CombiningCostsModule, PLearn::NLLCostModule, PLearn::SoftmaxNLLCostModule, and PLearn::SquaredErrorCostModule.

Definition at line 298 of file CostModule.cc.

References bpropUpdate(), and PLearn::OnlineLearningModule::tmp_input_gradient.

{
    bpropUpdate( input, target, cost, tmp_input_gradient );
}

void PLearn::CostModule::bpropUpdate ( const Vec & input_and_target,
const Vec & output,
Vec & input_and_target_gradient,
const Vec & output_gradient,
bool  accumulate = false 
) [virtual]

this version is provided for compatibility with the parent class.

Reimplemented from PLearn::OnlineLearningModule.

Definition at line 311 of file CostModule.cc.

References PLearn::OnlineLearningModule::bpropUpdate().

{
    inherited::bpropUpdate( input_and_target, output,
                            input_and_target_gradient, output_gradient,
                            accumulate );
}

virtual void PLearn::CostModule::bpropUpdate ( const Mat & input,
const Mat & output,
Mat & input_gradient,
const Mat & output_gradient,
bool  accumulate = false 
) [inline, virtual]

SOON TO BE DEPRECATED, USE bpropAccUpdate(const TVec<Mat*>& ports_value, const TVec<Mat*>& ports_gradient)

Reimplemented from PLearn::OnlineLearningModule.

Definition at line 114 of file CostModule.h.

References PLearn::OnlineLearningModule::bpropUpdate().

    {
        inherited::bpropUpdate(input, output, input_gradient, output_gradient,
                accumulate);
    }

void PLearn::CostModule::bpropUpdate ( const Vec & input,
const Vec & target,
real  cost,
Vec & input_gradient,
bool  accumulate = false 
) [virtual]

Adapt based on the cost gradient, and obtain the input gradient.

Reimplemented in PLearn::CombiningCostsModule, PLearn::CrossEntropyCostModule, PLearn::NLLCostModule, PLearn::ProcessInputCostModule, PLearn::SoftmaxNLLCostModule, and PLearn::SquaredErrorCostModule.

Definition at line 270 of file CostModule.cc.

References PLearn::OnlineLearningModule::input_size, PLASSERT_MSG, PLearn::TVec< T >::resize(), PLearn::TVec< T >::size(), PLearn::TVec< T >::subVec(), target_size, tmp_costs, tmp_input_and_target, and tmp_input_and_target_gradient.

Referenced by bpropAccUpdate(), and bpropUpdate().

{
    // default version, calling the bpropUpdate with inherited prototype
    tmp_input_and_target.resize( input_size + target_size );
    tmp_input_and_target.subVec( 0, input_size ) << input;
    tmp_input_and_target.subVec( input_size, target_size ) << target;
    tmp_input_and_target_gradient.resize( input_size + target_size );
    tmp_costs.resize(1);
    tmp_costs[0] = cost;
    static const Vec one(1,1);

    bpropUpdate( tmp_input_and_target, tmp_costs,
                 tmp_input_and_target_gradient, one );

    if( accumulate )
    {
        PLASSERT_MSG( input_gradient.size() == input_size,
                      "Cannot resize input_gradient AND accumulate into it" );
        input_gradient += tmp_input_and_target_gradient.subVec( 0, input_size );
    }
    else
    {
        input_gradient.resize( input_size );
        input_gradient << tmp_input_and_target_gradient.subVec( 0, input_size );
    }
}
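
Like the bbpropUpdate default further up, this wraps input and target into one vector and calls the parent prototype below with an all-ones cost gradient. A sketch of the typical per-sample training step built on this method, assuming module, input and target have been set up elsewhere:

    real cost;
    Vec input_gradient;
    module.fprop(input, target, cost);          // compute the cost
    module.bpropUpdate(input, target, cost,     // adapt parameters and obtain
                       input_gradient);         // the gradient w.r.t. the input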

virtual void PLearn::CostModule::bpropUpdate ( const Mat & inputs,
const Mat & targets,
const Vec & costs,
Mat & input_gradients,
bool  accumulate = false 
) [inline, virtual]

Adapt based on the mini-batch cost gradient, and obtain the mini-batch input gradient.

Reimplemented in PLearn::CombiningCostsModule, PLearn::CrossEntropyCostModule, PLearn::LayerCostModule, PLearn::NLLCostModule, PLearn::ProcessInputCostModule, PLearn::SoftmaxNLLCostModule, and PLearn::SquaredErrorCostModule.

Definition at line 93 of file CostModule.h.

References classname(), and PLERROR.

    {
        PLERROR("bpropUpdate on mini-batches not implemented in class %s",
                classname().c_str());
    }

void PLearn::CostModule::build ( ) [virtual]

Post-constructor.

The normal implementation should call simply inherited::build(), then this class's build_(). This method should be callable again at later times, after modifying some option fields to change the "architecture" of the object.

Reimplemented from PLearn::OnlineLearningModule.

Reimplemented in PLearn::ClassErrorCostModule, PLearn::CombiningCostsModule, PLearn::CrossEntropyCostModule, PLearn::LayerCostModule, PLearn::NLLCostModule, PLearn::ProcessInputCostModule, PLearn::SoftmaxNLLCostModule, and PLearn::SquaredErrorCostModule.

Definition at line 79 of file CostModule.cc.

References PLearn::OnlineLearningModule::build(), and build_().

Referenced by PLearn::ProcessInputCostModule::build(), PLearn::LayerCostModule::build(), PLearn::CombiningCostsModule::build(), and PLearn::ClassErrorCostModule::build().

{
    inherited::build();
    build_();
}

void PLearn::CostModule::build_ ( ) [private]

This does the actual building.

Reimplemented from PLearn::OnlineLearningModule.

Reimplemented in PLearn::ClassErrorCostModule, PLearn::CombiningCostsModule, PLearn::CrossEntropyCostModule, PLearn::LayerCostModule, PLearn::NLLCostModule, PLearn::ProcessInputCostModule, PLearn::SoftmaxNLLCostModule, and PLearn::SquaredErrorCostModule.

Definition at line 74 of file CostModule.cc.

Referenced by build().

{
}

string PLearn::CostModule::classname ( ) const [virtual]
TVec< string > PLearn::CostModule::costNames ( ) [virtual]
void PLearn::CostModule::declareOptions ( OptionList & ol) [static, protected]
static const PPath& PLearn::CostModule::declaringFile ( ) [inline, static]
CostModule * PLearn::CostModule::deepCopy ( CopiesMap & copies) const [virtual]
void PLearn::CostModule::forget ( ) [virtual]

reset the parameters to the state they would be BEFORE starting training.

Note that this method is necessarily called from build().

Implements PLearn::OnlineLearningModule.

Reimplemented in PLearn::ClassErrorCostModule, PLearn::CombiningCostsModule, PLearn::LayerCostModule, and PLearn::ProcessInputCostModule.

Definition at line 388 of file CostModule.cc.

{
}
void PLearn::CostModule::fprop ( const TVec< Mat * > &  ports_value) [virtual]

Overridden to try to use the standard mini-batch fprop when possible.

Reimplemented from PLearn::OnlineLearningModule.

Reimplemented in PLearn::LayerCostModule, PLearn::NLLCostModule, PLearn::SoftmaxNLLCostModule, and PLearn::SquaredErrorCostModule.

Definition at line 215 of file CostModule.cc.

References PLearn::OnlineLearningModule::fprop(), fprop(), PLearn::TMat< T >::isEmpty(), PLearn::TVec< T >::length(), PLearn::OnlineLearningModule::nPorts(), and PLASSERT.

{
    PLASSERT( ports_value.length() == nPorts() );
    if (ports_value.length() == 3) {
        Mat* prediction = ports_value[0];
        Mat* target = ports_value[1];
        Mat* cost = ports_value[2];
        if (prediction && target && cost &&
            !prediction->isEmpty() && !target->isEmpty() && cost->isEmpty())
        {
            // Standard fprop: (prediction, target) -> cost
            fprop(*prediction, *target, *cost);
            return;
        }
    }
    // Default version does not work: try to re-use the parent's default fprop.
    inherited::fprop(ports_value);
}
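
A sketch of the standard ports-based call matching the fast path above: 'prediction' and 'target' are full matrices, and an empty 'cost' matrix asks the module to resize and fill it (names and batch_size are illustrative):

    Mat prediction(batch_size, module.input_size);  // one row per example
    Mat target(batch_size, module.target_size);
    Mat cost;                                       // empty on entry: filled by fprop
    TVec<Mat*> ports_value(3);
    ports_value[0] = &prediction;
    ports_value[1] = &target;
    ports_value[2] = &cost;
    module.fprop(ports_value);                      // cost now holds one row per example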

void PLearn::CostModule::fprop ( const Vec & input,
const Vec & target,
real & cost 
) const [virtual]

given the input and target, compute the main output (cost)

Reimplemented in PLearn::ClassErrorCostModule, PLearn::CrossEntropyCostModule, and PLearn::ProcessInputCostModule.

Definition at line 190 of file CostModule.cc.

References tmp_costs.

Referenced by fprop().

{
    // Keep only the first cost.
    fprop( input, target, tmp_costs );
    cost = tmp_costs[0];
}

void PLearn::CostModule::fprop ( const Mat & inputs,
const Mat & targets,
Vec & costs 
) [virtual]

Mini-batch version.

Reimplemented in PLearn::ProcessInputCostModule.

Definition at line 197 of file CostModule.cc.

References PLearn::TMat< T >::column(), fprop(), PLearn::TMat< T >::length(), PLearn::OnlineLearningModule::output_size, PLearn::TMat< T >::resize(), PLearn::TVec< T >::resize(), and tmp_costs_mat.

{
    // Keep only the first cost.
    tmp_costs_mat.resize(inputs.length(), output_size);
    fprop(inputs, targets, tmp_costs_mat);
    costs.resize(tmp_costs_mat.length());
    costs << tmp_costs_mat.column(0);
}

void PLearn::CostModule::fprop ( const Vec & input_and_target,
Vec & output 
) const [virtual]

this version is provided for compatibility with the parent class

for compatibility with OnlineLearningModule interface

Reimplemented from PLearn::OnlineLearningModule.

Definition at line 207 of file CostModule.cc.

References fprop(), PLearn::OnlineLearningModule::input_size, PLASSERT, PLearn::TVec< T >::size(), PLearn::TVec< T >::subVec(), and target_size.

{
    PLASSERT( input_and_target.size() == input_size + target_size );
    fprop( input_and_target.subVec( 0, input_size ),
           input_and_target.subVec( input_size, target_size ),
           output );
}

void PLearn::CostModule::fprop ( const Vec & input,
const Vec & target,
Vec & cost 
) const [virtual]

this version allows for several costs

Reimplemented in PLearn::ClassErrorCostModule, PLearn::CombiningCostsModule, PLearn::CrossEntropyCostModule, PLearn::NLLCostModule, PLearn::ProcessInputCostModule, PLearn::SoftmaxNLLCostModule, and PLearn::SquaredErrorCostModule.

Definition at line 177 of file CostModule.cc.

References PLERROR.

{
    PLERROR("CostModule::fprop(const Vec& input, const Vec& target, Vec& cost)"
            "\n"
            "is not implemented. You have to implement it in your class.\n");
}
void PLearn::CostModule::fprop ( const Mat & inputs,
const Mat & targets,
Mat & costs 
) const [virtual]

Mini-batch version with several costs.

Reimplemented in PLearn::ClassErrorCostModule, PLearn::CombiningCostsModule, PLearn::CrossEntropyCostModule, PLearn::LayerCostModule, PLearn::NLLCostModule, PLearn::ProcessInputCostModule, and PLearn::SoftmaxNLLCostModule.

Definition at line 184 of file CostModule.cc.

References classname(), and PLERROR.

{
    PLERROR("In CostModule::fprop - Mini-batch version not implemented for "
            "class %s", classname().c_str());
}

OptionList & PLearn::CostModule::getOptionList ( ) const [virtual]
OptionMap & PLearn::CostModule::getOptionMap ( ) const [virtual]
const TVec< string > & PLearn::CostModule::getPorts ( ) [virtual]

Overridden so that the default ports being returned are "prediction", "target" and "cost".

Reimplemented from PLearn::OnlineLearningModule.

Reimplemented in PLearn::LayerCostModule.

Definition at line 237 of file CostModule.cc.

References PLearn::TVec< T >::append(), and PLearn::TVec< T >::isEmpty().

Referenced by getPortSizes().

                                         {
    static TVec<string> default_ports;
    if (default_ports.isEmpty()) {
        default_ports.append("prediction");
        default_ports.append("target");
        default_ports.append("cost");
    }
    return default_ports;
}

const TMat< int > & PLearn::CostModule::getPortSizes ( ) [virtual]

Overridden so that the default behavior returns proper widths for the 'prediction', 'target' and 'cost' ports.

Reimplemented from PLearn::OnlineLearningModule.

Reimplemented in PLearn::LayerCostModule.

Definition at line 250 of file CostModule.cc.

References PLearn::TMat< T >::fill(), getPorts(), PLearn::OnlineLearningModule::input_size, PLearn::TMat< T >::length(), PLearn::OnlineLearningModule::nPorts(), PLearn::OnlineLearningModule::output_size, PLASSERT, PLearn::OnlineLearningModule::port_sizes, PLearn::TMat< T >::resize(), and target_size.

                                          {
    int n_ports = nPorts();
    if (port_sizes.length() != n_ports) {
        port_sizes.resize(n_ports, 2);
        port_sizes.fill(-1);
        if (n_ports >= 3) {
            PLASSERT( getPorts()[0] == "prediction" &&
                      getPorts()[1] == "target"     &&
                      getPorts()[2] == "cost" );
            port_sizes(0, 1) = input_size;
            port_sizes(1, 1) = target_size;
            port_sizes(2, 1) = output_size;
        }
    }
    return port_sizes;
}
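
Given the layout established above (one row per port, widths in column 1, -1 meaning undefined), a caller can read back the expected widths as follows, assuming a built module object:

    const TMat<int>& sizes = module.getPortSizes();
    int prediction_width = sizes(0, 1);  // == input_size
    int target_width     = sizes(1, 1);  // == target_size
    int cost_width       = sizes(2, 1);  // == output_size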

RemoteMethodMap & PLearn::CostModule::getRemoteMethodMap ( ) const [virtual]
void PLearn::CostModule::makeDeepCopyFromShallowCopy ( CopiesMap & copies) [virtual]

Member Data Documentation

Vec PLearn::CostModule::store_costs [protected]

Used to store costs temporarily.

Definition at line 183 of file CostModule.h.

Referenced by bpropAccUpdate(), and makeDeepCopyFromShallowCopy().

Vec PLearn::CostModule::tmp_costs [mutable, protected]

Definition at line 177 of file CostModule.h.

Referenced by bbpropUpdate(), bpropUpdate(), fprop(), and makeDeepCopyFromShallowCopy().

Mat PLearn::CostModule::tmp_costs_mat [protected]

Definition at line 181 of file CostModule.h.

Referenced by fprop(), and makeDeepCopyFromShallowCopy().

Vec PLearn::CostModule::tmp_input_and_target [protected]

Definition at line 178 of file CostModule.h.

Referenced by bbpropUpdate(), bpropUpdate(), and makeDeepCopyFromShallowCopy().

Vec PLearn::CostModule::tmp_input_and_target_diag_hessian [protected]

Definition at line 180 of file CostModule.h.

Referenced by bbpropUpdate(), and makeDeepCopyFromShallowCopy().

Vec PLearn::CostModule::tmp_input_and_target_gradient [protected]

Definition at line 179 of file CostModule.h.

Referenced by bbpropUpdate(), bpropUpdate(), and makeDeepCopyFromShallowCopy().

Mat PLearn::CostModule::tmp_input_gradients [protected]

Definition at line 182 of file CostModule.h.

Referenced by bpropUpdate(), and makeDeepCopyFromShallowCopy().


The documentation for this class was generated from the following files:

 CostModule.h
 CostModule.cc