PLearn 0.1
PLearn::OnlineLearningModule Class Reference

Learn to map inputs to outputs, online, using caller-provided gradients. More...

#include <OnlineLearningModule.h>

Inheritance diagram for PLearn::OnlineLearningModule: [graph omitted]
Collaboration diagram for PLearn::OnlineLearningModule: [graph omitted]

List of all members.

Public Member Functions

 OnlineLearningModule (const string &the_name="", bool call_build_=false)
 Default constructor.
virtual void fprop (const Vec &input, Vec &output) const
 Given the input, compute the output (possibly resizing it appropriately). SOON TO BE DEPRECATED, USE fprop(const TVec<Mat*>& ports_value).
virtual void fprop (const Mat &inputs, Mat &outputs)
 Mini-batch fprop.
virtual void fprop (const TVec< Mat * > &ports_value)
 Perform a fprop step.
virtual map< string, Mat > namedFprop (map< string, Mat > &inputs, TVec< string > wanted_outputs)
virtual map< string, Mat > namedBpropAccUpdate (map< string, Mat > &values, map< string, Mat > &gradients, TVec< string > additional_input_gradients)
virtual void bpropUpdate (const Vec &input, const Vec &output, const Vec &output_gradient)
 SOON TO BE DEPRECATED, USE bpropAccUpdate(const TVec<Mat*>& ports_value, const TVec<Mat*>& ports_gradient). Adapt based on the output gradient: this method should only be called just after a corresponding fprop, with the same values for its first two arguments (and output should not have been modified since then).
virtual void bpropUpdate (const Mat &inputs, const Mat &outputs, const Mat &output_gradients)
 SOON TO BE DEPRECATED, USE bpropAccUpdate(const TVec<Mat*>& ports_value, const TVec<Mat*>& ports_gradient) Batch version.
virtual void bpropAccUpdate (const TVec< Mat * > &ports_value, const TVec< Mat * > &ports_gradient)
 Perform a back propagation step (also updating parameters according to the provided gradient).
void bpropUpdate (const TVec< Mat * > &ports_value, const TVec< Mat * > &ports_gradient)
 Same as 'bpropAccUpdate', except that gradients are not accumulated.
virtual void bpropUpdate (const Vec &input, const Vec &output, Vec &input_gradient, const Vec &output_gradient, bool accumulate=false)
 SOON TO BE DEPRECATED, USE bpropAccUpdate(const TVec<Mat*>& ports_value, const TVec<Mat*>& ports_gradient). This version also returns the input gradient.
virtual void bpropUpdate (const Mat &inputs, const Mat &outputs, Mat &input_gradients, const Mat &output_gradients, bool accumulate=false)
 SOON TO BE DEPRECATED, USE bpropAccUpdate(const TVec<Mat*>& ports_value, const TVec<Mat*>& ports_gradient)
virtual void bbpropUpdate (const Vec &input, const Vec &output, const Vec &output_gradient, const Vec &output_diag_hessian)
 Similar to bpropUpdate, but adapt based also on the estimation of the diagonal of the Hessian matrix, and propagates this back.
virtual void bbpropUpdate (const Vec &input, const Vec &output, Vec &input_gradient, const Vec &output_gradient, Vec &input_diag_hessian, const Vec &output_diag_hessian, bool accumulate=false)
 This version also returns the input gradient and diagonal Hessian. The 'accumulate' flag indicates whether input_gradient and input_diag_hessian are accumulated into or overwritten with the computed derivatives.
virtual void forget ()=0
 Reset the parameters to the state they would be in BEFORE starting training.
virtual void finalize ()
 optionally perform some processing after training, or after a series of fprop/bpropUpdate calls to prepare the model for truly out-of-sample operation
virtual bool bpropDoesNothing ()
virtual void setLearningRate (real dynamic_learning_rate)
virtual const TVec< string > & getPorts ()
 Return the list of ports in the module.
virtual const TMat< int > & getPortSizes ()
 Return the size of all ports, in the form of a two-column matrix, where each row represents a port, and the two numbers on a row are respectively its length and its width (with -1 representing an undefined or variable value).
virtual int getPortIndex (const string &port)
 Return the index (as in the list of ports returned by getPorts()) of a given port.
int getPortWidth (const string &port)
 Return the width of a specific port.
int getPortLength (const string &port)
 Return the length of a specific port.
int nPorts ()
 Return the number of ports in the module.
string getPortName (int i)
 Return name of the i-th port.
void checkProp (const TVec< Mat * > &ports_data)
 This method may be called at the end of the 'fprop' or 'bpropAccUpdate' methods (respectively with 'ports_value' or 'ports_gradient' as argument) in order to ensure all required ports have been properly computed (otherwise, an error is thrown).
virtual OnlineLearningModule * deepCopy (CopiesMap &copies) const
virtual void build ()
 Post-constructor.
virtual void makeDeepCopyFromShallowCopy (CopiesMap &copies)
 Transforms a shallow copy into a deep copy.

Static Public Member Functions

static string _classname_ ()
static OptionList & _getOptionList_ ()
static RemoteMethodMap & _getRemoteMethodMap_ ()
static bool _isa_ (const Object *o)
static void _static_initialize_ ()
static const PPath & declaringFile ()

Public Attributes

int input_size
 input size
int output_size
 output size
string name
bool estimate_simpler_diag_hessian
 Compute a simpler estimate of the diagonal of the input Hessian matrix, using only the first (positive) term in: d²C/dx² ~= d²C/dy² (dy/dx)² [+ dC/dy d²y/dx²]
PPath expdir
 Path of the directory associated with this module, in which it should save any file it wishes to create.
PP< PRandom > random_gen
 optional random generator, possibly shared among several modules
bool use_fast_approximations
 use tables to approximate nonlinearities such as sigmoid, tanh, and softplus
int verbosity

Static Public Attributes

static bool during_training = false
static StaticInitializer _static_initializer_

Static Protected Member Functions

static void declareOptions (OptionList &ol)
 Declares the class options.
static void declareMethods (RemoteMethodMap &rmm)
 Declare the methods that are remote-callable.

Protected Attributes

TMat< int > port_sizes
 Used to store the size of each port (may be used in sub-classes).
Vec tmp_input_gradient
Mat tmpm_input_gradient
Vec tmp_input_diag_hessian

Private Types

typedef Object inherited

Private Member Functions

void build_ ()
 This does the actual building.

Detailed Description

Learn to map inputs to outputs, online, using caller-provided gradients.

This pure virtual class (i.e. an interface) can basically do two things:

  • map an input to an output
  • modify itself when told in what direction the output should have changed (i.e. output gradient), while optionally giving back the information about how the input should also have changed (i.e. input gradient)

Definition at line 63 of file OnlineLearningModule.h.
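As a rough illustration of the interface described above, here is a minimal sketch of what a derived module might look like. It is only a sketch: a real PLearn module would also go through the usual object machinery (declareOptions(), build_(), makeDeepCopyFromShallowCopy(), the DECLARE/IMPLEMENT macros), and the class name ScaleModule and its single 'scale' parameter are hypothetical.

#include "OnlineLearningModule.h"

namespace PLearn {

// Hypothetical module that multiplies its input by one learned scalar.
class ScaleModule : public OnlineLearningModule
{
public:
    real scale;            // the single learned parameter (hypothetical)
    real learning_rate;    // set through setLearningRate()

    ScaleModule() : scale(1.0), learning_rate(0.0) {}

    //! Map input to output: y = scale * x.
    virtual void fprop(const Vec& input, Vec& output) const
    {
        output.resize(input.length());
        for (int i = 0; i < input.length(); i++)
            output[i] = scale * input[i];
    }

    //! Adapt 'scale' from the output gradient and return the input gradient.
    virtual void bpropUpdate(const Vec& input, const Vec& output,
                             Vec& input_gradient, const Vec& output_gradient,
                             bool accumulate = false)
    {
        input_gradient.resize(input.length());
        if (!accumulate)
            input_gradient.clear();
        real dscale = 0;
        for (int i = 0; i < input.length(); i++) {
            input_gradient[i] += scale * output_gradient[i]; // dC/dx = scale * dC/dy
            dscale += output_gradient[i] * input[i];         // dC/dscale
        }
        scale -= learning_rate * dscale;                     // online update
    }

    virtual void forget() { scale = 1.0; }
    virtual void setLearningRate(real dynamic_learning_rate)
    { learning_rate = dynamic_learning_rate; }
};

} // namespace PLearn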


Member Typedef Documentation

typedef Object PLearn::OnlineLearningModule::inherited [private]

Reimplemented from PLearn::Object.

Reimplemented in PLearn::ArgmaxModule, PLearn::BackConvolution2DModule, PLearn::BinarizeModule, PLearn::ClassErrorCostModule, PLearn::CombiningCostsModule, PLearn::Convolution2DModule, PLearn::CostModule, PLearn::CrossEntropyCostModule, PLearn::NLLErrModule, PLearn::RBMBinomialLayer, PLearn::RBMConv2DLLParameters, PLearn::RBMGaussianLayer, PLearn::RBMGenericParameters, PLearn::RBMJointGenericParameters, PLearn::RBMJointLLParameters, PLearn::RBMLayer, PLearn::RBMLLParameters, PLearn::RBMLQParameters, PLearn::RBMMixedLayer, PLearn::RBMMultinomialLayer, PLearn::RBMParameters, PLearn::RBMQLParameters, PLearn::RBMTruncExpLayer, PLearn::SquaredErrModule, PLearn::StackedModulesModule, PLearn::UndirectedSoftmaxModule, PLearn::KLp0p1RBMModule, PLearn::TreeDBNModule, PLearn::ForwardModule, PLearn::GradNNetLayerModule, PLearn::IdentityModule, PLearn::LayerCostModule, PLearn::LinearCombinationModule, PLearn::LinearFilterModule, PLearn::LogaddOnBagsModule, PLearn::MatrixModule, PLearn::MaxSubsampling2DModule, PLearn::ModuleStackModule, PLearn::NetworkModule, PLearn::NLLCostModule, PLearn::NullModule, PLearn::OnBagsModule, PLearn::ProcessInputCostModule, PLearn::RBMBinomialLayer, PLearn::RBMClassificationModule, PLearn::RBMConnection, PLearn::RBMConv2DConnection, PLearn::RBMDiagonalMatrixConnection, PLearn::RBMGaussianLayer, PLearn::RBMLateralBinomialLayer, PLearn::RBMLayer, PLearn::RBMLocalMultinomialLayer, PLearn::RBMMatrixConnection, PLearn::RBMMatrixConnectionNatGrad, PLearn::RBMMatrixTransposeConnection, PLearn::RBMMixedConnection, PLearn::RBMMixedLayer, PLearn::RBMModule, PLearn::RBMMultinomialLayer, PLearn::RBMMultitaskClassificationModule, PLearn::RBMRateLayer, PLearn::RBMSparse1DMatrixConnection, PLearn::RBMTruncExpLayer, PLearn::RBMWoodsLayer, PLearn::ScaleGradientModule, PLearn::ShuntingNNetLayerModule, PLearn::SoftmaxModule, PLearn::SoftmaxNLLCostModule, PLearn::SplitModule, PLearn::SquaredErrorCostModule, PLearn::Subsampling2DModule, PLearn::Supersampling2DModule, PLearn::TanhModule, PLearn::VBoundDBN2, PLearn::NnlmOutputLayer, and PLearn::NnlmWordRepresentationLayer.

Definition at line 65 of file OnlineLearningModule.h.


Constructor & Destructor Documentation

PLearn::OnlineLearningModule::OnlineLearningModule ( const string &  the_name = "",
bool  call_build_ = false 
)

Default constructor.

For safety, an error is raised if 'the_name' is empty and 'call_build_' is true, since the default value of 'name' should be the class name, and it is not available in the constructor.

Definition at line 70 of file OnlineLearningModule.cc.

References build_(), and PLERROR.

OnlineLearningModule::OnlineLearningModule(const string& the_name, bool call_build_):
    inherited(call_build_),
    input_size(-1),
    output_size(-1),
    name(the_name),
    estimate_simpler_diag_hessian(false),
    use_fast_approximations(true),
    verbosity(1)
{
    if (call_build_) {
        if (the_name.empty())
            PLERROR("In OnlineLearningModule::OnlineLearningModule - You "
                    "cannot create a new OnlineLearningModule with an empty "
                    "name and call build within the constructor itself");
        build_();
    }
}



Member Function Documentation

string PLearn::OnlineLearningModule::_classname_ ( ) [static]

Reimplemented from PLearn::Object.

Reimplemented in PLearn::ArgmaxModule, PLearn::BackConvolution2DModule, PLearn::BinarizeModule, PLearn::ClassErrorCostModule, PLearn::CombiningCostsModule, PLearn::Convolution2DModule, PLearn::CostModule, PLearn::CrossEntropyCostModule, PLearn::NLLErrModule, PLearn::RBMBinomialLayer, PLearn::RBMConv2DLLParameters, PLearn::RBMGaussianLayer, PLearn::RBMGenericParameters, PLearn::RBMJointGenericParameters, PLearn::RBMJointLLParameters, PLearn::RBMLayer, PLearn::RBMLLParameters, PLearn::RBMLQParameters, PLearn::RBMMixedLayer, PLearn::RBMMultinomialLayer, PLearn::RBMParameters, PLearn::RBMQLParameters, PLearn::RBMTruncExpLayer, PLearn::SquaredErrModule, PLearn::StackedModulesModule, PLearn::UndirectedSoftmaxModule, PLearn::KLp0p1RBMModule, PLearn::TreeDBNModule, PLearn::ForwardModule, PLearn::GradNNetLayerModule, PLearn::IdentityModule, PLearn::LayerCostModule, PLearn::LinearCombinationModule, PLearn::LinearFilterModule, PLearn::LogaddOnBagsModule, PLearn::MatrixModule, PLearn::MaxSubsampling2DModule, PLearn::ModuleStackModule, PLearn::NetworkModule, PLearn::NLLCostModule, PLearn::NullModule, PLearn::OnBagsModule, PLearn::ProcessInputCostModule, PLearn::RBMBinomialLayer, PLearn::RBMClassificationModule, PLearn::RBMConnection, PLearn::RBMConv2DConnection, PLearn::RBMDiagonalMatrixConnection, PLearn::RBMGaussianLayer, PLearn::RBMLateralBinomialLayer, PLearn::RBMLayer, PLearn::RBMLocalMultinomialLayer, PLearn::RBMMatrixConnection, PLearn::RBMMatrixConnectionNatGrad, PLearn::RBMMatrixTransposeConnection, PLearn::RBMMixedConnection, PLearn::RBMMixedLayer, PLearn::RBMModule, PLearn::RBMMultinomialLayer, PLearn::RBMMultitaskClassificationModule, PLearn::RBMRateLayer, PLearn::RBMSparse1DMatrixConnection, PLearn::RBMTruncExpLayer, PLearn::RBMWoodsLayer, PLearn::ScaleGradientModule, PLearn::ShuntingNNetLayerModule, PLearn::SoftmaxModule, PLearn::SoftmaxNLLCostModule, PLearn::SplitModule, PLearn::SquaredErrorCostModule, PLearn::Subsampling2DModule, PLearn::Supersampling2DModule, PLearn::TanhModule, PLearn::VBoundDBN2, PLearn::NnlmOutputLayer, and PLearn::NnlmWordRepresentationLayer.

Definition at line 63 of file OnlineLearningModule.cc.

OptionList & PLearn::OnlineLearningModule::_getOptionList_ ( ) [static]

Reimplemented from PLearn::Object.

Reimplemented in PLearn::ArgmaxModule, PLearn::BackConvolution2DModule, PLearn::BinarizeModule, PLearn::ClassErrorCostModule, PLearn::CombiningCostsModule, PLearn::Convolution2DModule, PLearn::CostModule, PLearn::CrossEntropyCostModule, PLearn::NLLErrModule, PLearn::RBMBinomialLayer, PLearn::RBMConv2DLLParameters, PLearn::RBMGaussianLayer, PLearn::RBMGenericParameters, PLearn::RBMJointGenericParameters, PLearn::RBMJointLLParameters, PLearn::RBMLayer, PLearn::RBMLLParameters, PLearn::RBMLQParameters, PLearn::RBMMixedLayer, PLearn::RBMMultinomialLayer, PLearn::RBMParameters, PLearn::RBMQLParameters, PLearn::RBMTruncExpLayer, PLearn::SquaredErrModule, PLearn::StackedModulesModule, PLearn::UndirectedSoftmaxModule, PLearn::KLp0p1RBMModule, PLearn::TreeDBNModule, PLearn::ForwardModule, PLearn::GradNNetLayerModule, PLearn::IdentityModule, PLearn::LayerCostModule, PLearn::LinearCombinationModule, PLearn::LinearFilterModule, PLearn::LogaddOnBagsModule, PLearn::MatrixModule, PLearn::MaxSubsampling2DModule, PLearn::ModuleStackModule, PLearn::NetworkModule, PLearn::NLLCostModule, PLearn::NullModule, PLearn::OnBagsModule, PLearn::ProcessInputCostModule, PLearn::RBMBinomialLayer, PLearn::RBMClassificationModule, PLearn::RBMConnection, PLearn::RBMConv2DConnection, PLearn::RBMDiagonalMatrixConnection, PLearn::RBMGaussianLayer, PLearn::RBMLateralBinomialLayer, PLearn::RBMLayer, PLearn::RBMLocalMultinomialLayer, PLearn::RBMMatrixConnection, PLearn::RBMMatrixConnectionNatGrad, PLearn::RBMMatrixTransposeConnection, PLearn::RBMMixedConnection, PLearn::RBMMixedLayer, PLearn::RBMModule, PLearn::RBMMultinomialLayer, PLearn::RBMMultitaskClassificationModule, PLearn::RBMRateLayer, PLearn::RBMSparse1DMatrixConnection, PLearn::RBMTruncExpLayer, PLearn::RBMWoodsLayer, PLearn::ScaleGradientModule, PLearn::ShuntingNNetLayerModule, PLearn::SoftmaxModule, PLearn::SoftmaxNLLCostModule, PLearn::SplitModule, PLearn::SquaredErrorCostModule, PLearn::Subsampling2DModule, PLearn::Supersampling2DModule, PLearn::TanhModule, PLearn::VBoundDBN2, PLearn::NnlmOutputLayer, and PLearn::NnlmWordRepresentationLayer.

Definition at line 63 of file OnlineLearningModule.cc.

RemoteMethodMap & PLearn::OnlineLearningModule::_getRemoteMethodMap_ ( ) [static]

Reimplemented from PLearn::Object.

Reimplemented in PLearn::ArgmaxModule, PLearn::BackConvolution2DModule, PLearn::BinarizeModule, PLearn::ClassErrorCostModule, PLearn::CombiningCostsModule, PLearn::Convolution2DModule, PLearn::CostModule, PLearn::CrossEntropyCostModule, PLearn::NLLErrModule, PLearn::RBMBinomialLayer, PLearn::RBMConv2DLLParameters, PLearn::RBMGaussianLayer, PLearn::RBMGenericParameters, PLearn::RBMJointGenericParameters, PLearn::RBMJointLLParameters, PLearn::RBMLayer, PLearn::RBMLLParameters, PLearn::RBMLQParameters, PLearn::RBMMixedLayer, PLearn::RBMMultinomialLayer, PLearn::RBMParameters, PLearn::RBMQLParameters, PLearn::RBMTruncExpLayer, PLearn::SquaredErrModule, PLearn::StackedModulesModule, PLearn::UndirectedSoftmaxModule, PLearn::KLp0p1RBMModule, PLearn::TreeDBNModule, PLearn::ForwardModule, PLearn::GradNNetLayerModule, PLearn::IdentityModule, PLearn::LayerCostModule, PLearn::LinearCombinationModule, PLearn::LinearFilterModule, PLearn::LogaddOnBagsModule, PLearn::MatrixModule, PLearn::MaxSubsampling2DModule, PLearn::ModuleStackModule, PLearn::NetworkModule, PLearn::NLLCostModule, PLearn::NullModule, PLearn::OnBagsModule, PLearn::ProcessInputCostModule, PLearn::RBMBinomialLayer, PLearn::RBMClassificationModule, PLearn::RBMConnection, PLearn::RBMConv2DConnection, PLearn::RBMDiagonalMatrixConnection, PLearn::RBMGaussianLayer, PLearn::RBMLateralBinomialLayer, PLearn::RBMLayer, PLearn::RBMLocalMultinomialLayer, PLearn::RBMMatrixConnection, PLearn::RBMMatrixConnectionNatGrad, PLearn::RBMMatrixTransposeConnection, PLearn::RBMMixedConnection, PLearn::RBMMixedLayer, PLearn::RBMModule, PLearn::RBMMultinomialLayer, PLearn::RBMMultitaskClassificationModule, PLearn::RBMRateLayer, PLearn::RBMSparse1DMatrixConnection, PLearn::RBMTruncExpLayer, PLearn::RBMWoodsLayer, PLearn::ScaleGradientModule, PLearn::ShuntingNNetLayerModule, PLearn::SoftmaxModule, PLearn::SoftmaxNLLCostModule, PLearn::SplitModule, PLearn::SquaredErrorCostModule, PLearn::Subsampling2DModule, PLearn::Supersampling2DModule, PLearn::TanhModule, PLearn::VBoundDBN2, PLearn::NnlmOutputLayer, and PLearn::NnlmWordRepresentationLayer.

Definition at line 63 of file OnlineLearningModule.cc.

Referenced by PLearn::TreeDBNModule::declareMethods(), PLearn::RBMModule::declareMethods(), and PLearn::RBMConnection::declareMethods().


bool PLearn::OnlineLearningModule::_isa_ ( const Object *  o) [static]

Reimplemented from PLearn::Object.

Reimplemented in PLearn::ArgmaxModule, PLearn::BackConvolution2DModule, PLearn::BinarizeModule, PLearn::ClassErrorCostModule, PLearn::CombiningCostsModule, PLearn::Convolution2DModule, PLearn::CostModule, PLearn::CrossEntropyCostModule, PLearn::NLLErrModule, PLearn::RBMBinomialLayer, PLearn::RBMConv2DLLParameters, PLearn::RBMGaussianLayer, PLearn::RBMGenericParameters, PLearn::RBMJointGenericParameters, PLearn::RBMJointLLParameters, PLearn::RBMLayer, PLearn::RBMLLParameters, PLearn::RBMLQParameters, PLearn::RBMMixedLayer, PLearn::RBMMultinomialLayer, PLearn::RBMParameters, PLearn::RBMQLParameters, PLearn::RBMTruncExpLayer, PLearn::SquaredErrModule, PLearn::StackedModulesModule, PLearn::UndirectedSoftmaxModule, PLearn::KLp0p1RBMModule, PLearn::TreeDBNModule, PLearn::ForwardModule, PLearn::GradNNetLayerModule, PLearn::IdentityModule, PLearn::LayerCostModule, PLearn::LinearCombinationModule, PLearn::LinearFilterModule, PLearn::LogaddOnBagsModule, PLearn::MatrixModule, PLearn::MaxSubsampling2DModule, PLearn::ModuleStackModule, PLearn::NetworkModule, PLearn::NLLCostModule, PLearn::NullModule, PLearn::OnBagsModule, PLearn::ProcessInputCostModule, PLearn::RBMBinomialLayer, PLearn::RBMClassificationModule, PLearn::RBMConnection, PLearn::RBMConv2DConnection, PLearn::RBMDiagonalMatrixConnection, PLearn::RBMGaussianLayer, PLearn::RBMLateralBinomialLayer, PLearn::RBMLayer, PLearn::RBMLocalMultinomialLayer, PLearn::RBMMatrixConnection, PLearn::RBMMatrixConnectionNatGrad, PLearn::RBMMatrixTransposeConnection, PLearn::RBMMixedConnection, PLearn::RBMMixedLayer, PLearn::RBMModule, PLearn::RBMMultinomialLayer, PLearn::RBMMultitaskClassificationModule, PLearn::RBMRateLayer, PLearn::RBMSparse1DMatrixConnection, PLearn::RBMTruncExpLayer, PLearn::RBMWoodsLayer, PLearn::ScaleGradientModule, PLearn::ShuntingNNetLayerModule, PLearn::SoftmaxModule, PLearn::SoftmaxNLLCostModule, PLearn::SplitModule, PLearn::SquaredErrorCostModule, PLearn::Subsampling2DModule, PLearn::Supersampling2DModule, PLearn::TanhModule, PLearn::VBoundDBN2, PLearn::NnlmOutputLayer, and PLearn::NnlmWordRepresentationLayer.

Definition at line 63 of file OnlineLearningModule.cc.

void PLearn::OnlineLearningModule::_static_initialize_ ( ) [static]

Reimplemented from PLearn::Object.

Reimplemented in PLearn::ArgmaxModule, PLearn::BackConvolution2DModule, PLearn::BinarizeModule, PLearn::ClassErrorCostModule, PLearn::CombiningCostsModule, PLearn::Convolution2DModule, PLearn::CostModule, PLearn::CrossEntropyCostModule, PLearn::NLLErrModule, PLearn::RBMBinomialLayer, PLearn::RBMConv2DLLParameters, PLearn::RBMGaussianLayer, PLearn::RBMGenericParameters, PLearn::RBMJointGenericParameters, PLearn::RBMJointLLParameters, PLearn::RBMLayer, PLearn::RBMLLParameters, PLearn::RBMLQParameters, PLearn::RBMMixedLayer, PLearn::RBMMultinomialLayer, PLearn::RBMParameters, PLearn::RBMQLParameters, PLearn::RBMTruncExpLayer, PLearn::SquaredErrModule, PLearn::StackedModulesModule, PLearn::UndirectedSoftmaxModule, PLearn::KLp0p1RBMModule, PLearn::TreeDBNModule, PLearn::ForwardModule, PLearn::GradNNetLayerModule, PLearn::IdentityModule, PLearn::LayerCostModule, PLearn::LinearCombinationModule, PLearn::LinearFilterModule, PLearn::LogaddOnBagsModule, PLearn::MatrixModule, PLearn::MaxSubsampling2DModule, PLearn::ModuleStackModule, PLearn::NetworkModule, PLearn::NLLCostModule, PLearn::NullModule, PLearn::OnBagsModule, PLearn::ProcessInputCostModule, PLearn::RBMBinomialLayer, PLearn::RBMClassificationModule, PLearn::RBMConnection, PLearn::RBMConv2DConnection, PLearn::RBMDiagonalMatrixConnection, PLearn::RBMGaussianLayer, PLearn::RBMLateralBinomialLayer, PLearn::RBMLayer, PLearn::RBMLocalMultinomialLayer, PLearn::RBMMatrixConnection, PLearn::RBMMatrixConnectionNatGrad, PLearn::RBMMatrixTransposeConnection, PLearn::RBMMixedConnection, PLearn::RBMMixedLayer, PLearn::RBMModule, PLearn::RBMMultinomialLayer, PLearn::RBMMultitaskClassificationModule, PLearn::RBMRateLayer, PLearn::RBMSparse1DMatrixConnection, PLearn::RBMTruncExpLayer, PLearn::RBMWoodsLayer, PLearn::ScaleGradientModule, PLearn::ShuntingNNetLayerModule, PLearn::SoftmaxModule, PLearn::SoftmaxNLLCostModule, PLearn::SplitModule, PLearn::SquaredErrorCostModule, PLearn::Subsampling2DModule, PLearn::Supersampling2DModule, PLearn::TanhModule, PLearn::VBoundDBN2, PLearn::NnlmOutputLayer, and PLearn::NnlmWordRepresentationLayer.

Definition at line 63 of file OnlineLearningModule.cc.

void PLearn::OnlineLearningModule::bbpropUpdate ( const Vec &  input,
const Vec &  output,
const Vec &  output_gradient,
const Vec &  output_diag_hessian 
) [virtual]

Similar to bpropUpdate, but adapt based also on the estimation of the diagonal of the Hessian matrix, and propagates this back.

If these methods are defined, you can use them INSTEAD of bpropUpdate(...). THE DEFAULT IMPLEMENTATION PROVIDED HERE JUST CALLS bbpropUpdate(input, output, input_gradient, output_gradient, in_hess, out_hess) AND IGNORES THE INPUT HESSIAN AND INPUT GRADIENT.

Reimplemented in PLearn::NLLErrModule, PLearn::SquaredErrModule, PLearn::UndirectedSoftmaxModule, PLearn::GradNNetLayerModule, PLearn::LinearFilterModule, PLearn::ModuleStackModule, and PLearn::TanhModule.

Definition at line 223 of file OnlineLearningModule.cc.

References tmp_input_diag_hessian, and tmp_input_gradient.

Referenced by PLearn::CostModule::bbpropUpdate().

{
    bbpropUpdate(input, output, tmp_input_gradient, output_gradient,
                 tmp_input_diag_hessian, output_diag_hessian);
}


void PLearn::OnlineLearningModule::bbpropUpdate ( const Vec &  input,
const Vec &  output,
Vec &  input_gradient,
const Vec &  output_gradient,
Vec &  input_diag_hessian,
const Vec &  output_diag_hessian,
bool  accumulate = false 
) [virtual]

This version also returns the input gradient and diagonal Hessian. The 'accumulate' flag indicates whether input_gradient and input_diag_hessian are accumulated into or overwritten with the computed derivatives.

Reimplemented in PLearn::BackConvolution2DModule, PLearn::Convolution2DModule, PLearn::CostModule, PLearn::ModuleStackModule, PLearn::SoftmaxModule, PLearn::Subsampling2DModule, PLearn::Supersampling2DModule, and PLearn::TanhModule.

Definition at line 231 of file OnlineLearningModule.cc.

References PLERROR.

{
    PLERROR("In OnlineLearningModule.cc: method 'bbpropUpdate' not"
            "implemented.\n"
            "Please implement it in your derived class, or use"
            "'bpropUpdate'.\n");
}
void PLearn::OnlineLearningModule::bpropAccUpdate ( const TVec< Mat * > &  ports_value,
const TVec< Mat * > &  ports_gradient 
) [virtual]

Perform a back propagation step (also updating parameters according to the provided gradient).

The matrices in 'ports_value' must be the same as the ones given in a previous call to 'fprop' (and thus they should in particular contain the result of the fprop computation). However, they are not necessarily the same as the ones given in the LAST call to 'fprop': if there is a need to store an internal module state, this should be done using a specific port to store this state. Each Mat* pointer in the 'ports_gradient' vector can be one of:

  • a full matrix: this is the gradient that is provided to the module, and can be used to compute other ports' gradients.
  • an empty matrix: this is a gradient we want to compute and accumulate into. This matrix must have length 0 and a width equal to the width of the corresponding matrix in the 'ports_value' vector (we can thus accumulate gradients using PLearn's ability to keep stored values intact when resizing a matrix's length).
  • a NULL pointer: this is a gradient that is not available, but does not need to be returned (or even computed).

The default version tries to use the standard mini-batch bpropUpdate method, when possible.

Reimplemented in PLearn::ArgmaxModule, PLearn::BinarizeModule, PLearn::CombiningCostsModule, PLearn::Convolution2DModule, PLearn::CostModule, PLearn::CrossEntropyCostModule, PLearn::KLp0p1RBMModule, PLearn::TreeDBNModule, PLearn::ForwardModule, PLearn::IdentityModule, PLearn::LayerCostModule, PLearn::LinearCombinationModule, PLearn::MatrixModule, PLearn::MaxSubsampling2DModule, PLearn::NetworkModule, PLearn::NLLCostModule, PLearn::NullModule, PLearn::OnBagsModule, PLearn::RBMConv2DConnection, PLearn::RBMMatrixConnection, PLearn::RBMMixedConnection, PLearn::RBMModule, PLearn::RBMSparse1DMatrixConnection, PLearn::SoftmaxNLLCostModule, PLearn::SplitModule, and PLearn::VBoundDBN2.

Definition at line 127 of file OnlineLearningModule.cc.

References bpropUpdate(), checkProp(), PLearn::Object::classname(), PLearn::TMat< T >::isEmpty(), PLearn::TMat< T >::length(), PLearn::TVec< T >::length(), PLASSERT, PLERROR, PLearn::TMat< T >::resize(), tmpm_input_gradient, and PLearn::TMat< T >::width().

Referenced by PLearn::CostModule::bpropAccUpdate(), bpropUpdate(), and namedBpropAccUpdate().

{
    if (ports_gradient.length() == 2) {
        Mat* input_grad = ports_gradient[0];
        Mat* output_grad = ports_gradient[1];
        if (!input_grad && !output_grad) {
            // Nothing to do.
            return;
        }
        if (output_grad && !output_grad->isEmpty() &&
             (!input_grad || input_grad->isEmpty()))
        {
            // We can try to re-use the standard mini-batch bpropUpdate method.
            if (!input_grad) {
                // We are not interested in the input gradient: use a dummy
                // matrix to store it.
                input_grad = &tmpm_input_gradient;
            }
            Mat* input_val = ports_value[0];
            Mat* output_val = ports_value[1];
            PLASSERT( input_val && output_val );
            input_grad->resize(input_val->length(), input_val->width());
            bpropUpdate(*input_val, *output_val, *input_grad, *output_grad,
                        true);
            checkProp(ports_gradient);
            return;
        }
    }
    PLERROR("In OnlineLearningModule::bpropAccUpdate - Port configuration "
            "not implemented for class '%s'", classname().c_str());
}
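The following is a hedged sketch of the caller side of the above: it assumes a plain two-port module (an "input" port followed by an "output" port, in that order) on which 'fprop' has already been run with the same value matrices, and it follows the three conventions listed above (full matrix = provided gradient, empty matrix = gradient to accumulate into, NULL = not needed).

#include "OnlineLearningModule.h"

using namespace PLearn;

// Sketch only: drive a two-port module through the ports-based backprop.
// 'input_val' and 'output_val' must be the matrices used in the previous
// fprop call; 'output_grad' is the full gradient w.r.t. the output.
void backpropSketch(PP<OnlineLearningModule> module,
                    Mat& input_val, Mat& output_val, Mat& output_grad)
{
    // Empty matrix (length 0, correct width): gradient we want accumulated.
    Mat input_grad(0, input_val.width());

    TVec<Mat*> ports_value(2);
    ports_value[0] = &input_val;
    ports_value[1] = &output_val;

    TVec<Mat*> ports_gradient(2);
    ports_gradient[0] = &input_grad;   // to be computed and accumulated into
    ports_gradient[1] = &output_grad;  // provided to the module

    module->bpropAccUpdate(ports_value, ports_gradient);
    // On return, input_grad holds the (accumulated) gradient w.r.t. the input,
    // and the module's parameters have been updated from output_grad.
}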


virtual bool PLearn::OnlineLearningModule::bpropDoesNothing ( ) [inline, virtual]
void PLearn::OnlineLearningModule::bpropUpdate ( const Vec &  input,
const Vec &  output,
const Vec &  output_gradient 
) [virtual]

SOON TO BE DEPRECATED, USE bpropAccUpdate(const TVec<Mat*>& ports_value, const TVec<Mat*>& ports_gradient). Adapt based on the output gradient: this method should only be called just after a corresponding fprop, with the same values for its first two arguments (and output should not have been modified since then).

Since sub-classes are supposed to learn ONLINE, the object is 'ready-to-be-used' just after any bpropUpdate. N.B. The DEFAULT IMPLEMENTATION JUST CALLS bpropUpdate(input, output, input_gradient, output_gradient) AND IGNORES INPUT GRADIENT.

Reimplemented in PLearn::NLLErrModule, PLearn::SquaredErrModule, PLearn::UndirectedSoftmaxModule, PLearn::GradNNetLayerModule, PLearn::LinearFilterModule, PLearn::ModuleStackModule, PLearn::ShuntingNNetLayerModule, PLearn::TanhModule, and PLearn::NnlmWordRepresentationLayer.

Definition at line 175 of file OnlineLearningModule.cc.

References tmp_input_gradient.

Referenced by bpropAccUpdate(), PLearn::CostModule::bpropUpdate(), bpropUpdate(), and PLearn::NnlmOnlineLearner::train().

{
    bpropUpdate(input, output, tmp_input_gradient, output_gradient);
}


void PLearn::OnlineLearningModule::bpropUpdate ( const TVec< Mat * > &  ports_value,
const TVec< Mat * > &  ports_gradient 
)

Same as 'bpropAccUpdate', except that gradients are not accumulated.

This method just calls 'bpropAccUpdate' after properly resizing and clearing the gradient matrices that need to be computed. Contrary to 'bpropAccUpdate', the empty matrices (those we want to compute) need not have the correct width, since we resize them here.

Definition at line 198 of file OnlineLearningModule.cc.

References bpropAccUpdate(), PLearn::TMat< T >::fill(), grad, i, PLearn::TMat< T >::isEmpty(), PLearn::TMat< T >::length(), PLearn::TVec< T >::length(), PLERROR, PLearn::TMat< T >::resize(), and PLearn::TMat< T >::width().

{
    for (int i = 0; i < ports_gradient.length(); i++) {
        Mat* grad = ports_gradient[i];
        if (grad && grad->isEmpty()) {
            // This gradient must be computed (= cleared + accumulated).
            Mat* val = ports_value[i];
            if (!val)
                PLERROR("In OnlineLearningModule::bpropUpdate - Cannot compute"
                        " the gradient of a port whose value is not available,"
                        " since we cannot easily know its size");
            grad->resize(val->length(), val->width());
            grad->fill(0); // Clear the gradient.
            grad->resize(0, grad->width()); // So it is accumulated later.
        }
    }
    bpropAccUpdate(ports_value, ports_gradient);
}


void PLearn::OnlineLearningModule::bpropUpdate ( const Vec &  input,
const Vec &  output,
Vec &  input_gradient,
const Vec &  output_gradient,
bool  accumulate = false 
) [virtual]

SOON TO BE DEPRECATED, USE bpropAccUpdate(const TVec<Mat*>& ports_value, const TVec<Mat*>& ports_gradient). This version also returns the input gradient.

N.B. THE DEFAULT IMPLEMENTATION JUST RAISES A PLERROR. The 'accumulate' flag indicates whether input_gradient is accumulated into or overwritten with the computed derivatives.

Reimplemented in PLearn::BackConvolution2DModule, PLearn::Convolution2DModule, PLearn::CostModule, PLearn::GradNNetLayerModule, PLearn::LinearFilterModule, PLearn::ModuleStackModule, PLearn::RBMBinomialLayer, PLearn::RBMClassificationModule, PLearn::RBMConv2DConnection, PLearn::RBMDiagonalMatrixConnection, PLearn::RBMGaussianLayer, PLearn::RBMLateralBinomialLayer, PLearn::RBMLayer, PLearn::RBMLocalMultinomialLayer, PLearn::RBMMatrixConnection, PLearn::RBMMatrixTransposeConnection, PLearn::RBMMixedConnection, PLearn::RBMMixedLayer, PLearn::RBMMultinomialLayer, PLearn::RBMMultitaskClassificationModule, PLearn::RBMRateLayer, PLearn::RBMTruncExpLayer, PLearn::RBMWoodsLayer, PLearn::ScaleGradientModule, PLearn::SoftmaxModule, PLearn::Subsampling2DModule, PLearn::Supersampling2DModule, and PLearn::TanhModule.

Definition at line 164 of file OnlineLearningModule.cc.

References PLearn::Object::classname(), and PLERROR.

{
    PLERROR("In OnlineLearningModule.cc: method 'bpropUpdate' not"
            " implemented.\n"
            "Please implement it in your derived class (%s) or do not call"
            " bpropUpdate.", classname().c_str());
}


void PLearn::OnlineLearningModule::bpropUpdate ( const Mat &  inputs,
const Mat &  outputs,
Mat &  input_gradients,
const Mat &  output_gradients,
bool  accumulate = false 
) [virtual]

void PLearn::OnlineLearningModule::bpropUpdate ( const Mat &  inputs,
const Mat &  outputs,
const Mat &  output_gradients 
) [virtual]

SOON TO BE DEPRECATED, USE bpropAccUpdate(const TVec<Mat*>& ports_value, const TVec<Mat*>& ports_gradient) Batch version.

Definition at line 192 of file OnlineLearningModule.cc.

References bpropUpdate(), and tmpm_input_gradient.

{
    bpropUpdate(inputs, outputs, tmpm_input_gradient, output_gradients);
}


void PLearn::OnlineLearningModule::build ( ) [virtual]

Post-constructor.

The normal implementation should simply call inherited::build(), then this class's build_(). This method should be callable again at later times, after modifying some option fields to change the "architecture" of the object.

Reimplemented from PLearn::Object.

Reimplemented in PLearn::ArgmaxModule, PLearn::BackConvolution2DModule, PLearn::BinarizeModule, PLearn::ClassErrorCostModule, PLearn::CombiningCostsModule, PLearn::Convolution2DModule, PLearn::CostModule, PLearn::CrossEntropyCostModule, PLearn::NLLErrModule, PLearn::RBMBinomialLayer, PLearn::RBMConv2DLLParameters, PLearn::RBMGaussianLayer, PLearn::RBMGenericParameters, PLearn::RBMJointGenericParameters, PLearn::RBMJointLLParameters, PLearn::RBMLayer, PLearn::RBMLLParameters, PLearn::RBMLQParameters, PLearn::RBMMixedLayer, PLearn::RBMMultinomialLayer, PLearn::RBMParameters, PLearn::RBMQLParameters, PLearn::RBMTruncExpLayer, PLearn::SquaredErrModule, PLearn::StackedModulesModule, PLearn::UndirectedSoftmaxModule, PLearn::KLp0p1RBMModule, PLearn::TreeDBNModule, PLearn::ForwardModule, PLearn::GradNNetLayerModule, PLearn::IdentityModule, PLearn::LayerCostModule, PLearn::LinearCombinationModule, PLearn::LinearFilterModule, PLearn::LogaddOnBagsModule, PLearn::MatrixModule, PLearn::MaxSubsampling2DModule, PLearn::ModuleStackModule, PLearn::NetworkModule, PLearn::NLLCostModule, PLearn::NullModule, PLearn::OnBagsModule, PLearn::ProcessInputCostModule, PLearn::RBMBinomialLayer, PLearn::RBMClassificationModule, PLearn::RBMConnection, PLearn::RBMConv2DConnection, PLearn::RBMDiagonalMatrixConnection, PLearn::RBMGaussianLayer, PLearn::RBMLateralBinomialLayer, PLearn::RBMLayer, PLearn::RBMLocalMultinomialLayer, PLearn::RBMMatrixConnection, PLearn::RBMMatrixConnectionNatGrad, PLearn::RBMMatrixTransposeConnection, PLearn::RBMMixedConnection, PLearn::RBMMixedLayer, PLearn::RBMModule, PLearn::RBMMultinomialLayer, PLearn::RBMMultitaskClassificationModule, PLearn::RBMRateLayer, PLearn::RBMSparse1DMatrixConnection, PLearn::RBMTruncExpLayer, PLearn::RBMWoodsLayer, PLearn::ScaleGradientModule, PLearn::ShuntingNNetLayerModule, PLearn::SoftmaxModule, PLearn::SoftmaxNLLCostModule, PLearn::SplitModule, PLearn::SquaredErrorCostModule, PLearn::Subsampling2DModule, PLearn::Supersampling2DModule, PLearn::TanhModule, PLearn::VBoundDBN2, PLearn::NnlmOutputLayer, and PLearn::NnlmWordRepresentationLayer.

Definition at line 259 of file OnlineLearningModule.cc.

References PLearn::Object::build(), and build_().

Referenced by PLearn::UndirectedSoftmaxModule::build(), PLearn::TreeDBNModule::build(), PLearn::TanhModule::build(), PLearn::Supersampling2DModule::build(), PLearn::Subsampling2DModule::build(), PLearn::StackedModulesModule::build(), PLearn::SquaredErrModule::build(), PLearn::SplitModule::build(), PLearn::ShuntingNNetLayerModule::build(), PLearn::ScaleGradientModule::build(), PLearn::RBMParameters::build(), PLearn::RBMMultitaskClassificationModule::build(), PLearn::RBMModule::build(), PLearn::RBMConnection::build(), PLearn::OnBagsModule::build(), PLearn::NullModule::build(), PLearn::NnlmWordRepresentationLayer::build(), PLearn::NnlmOutputLayer::build(), PLearn::NLLErrModule::build(), PLearn::NetworkModule::build(), PLearn::ModuleStackModule::build(), PLearn::MaxSubsampling2DModule::build(), PLearn::MatrixModule::build(), PLearn::LinearFilterModule::build(), PLearn::LinearCombinationModule::build(), PLearn::KLp0p1RBMModule::build(), PLearn::GradNNetLayerModule::build(), PLearn::ForwardModule::build(), PLearn::CostModule::build(), PLearn::Convolution2DModule::build(), PLearn::BinarizeModule::build(), and PLearn::BackConvolution2DModule::build().


void PLearn::OnlineLearningModule::build_ ( ) [private]

This does the actual building.

Reimplemented from PLearn::Object.

Reimplemented in PLearn::ArgmaxModule, PLearn::BackConvolution2DModule, PLearn::BinarizeModule, PLearn::ClassErrorCostModule, PLearn::CombiningCostsModule, PLearn::Convolution2DModule, PLearn::CostModule, PLearn::CrossEntropyCostModule, PLearn::NLLErrModule, PLearn::RBMBinomialLayer, PLearn::RBMConv2DLLParameters, PLearn::RBMGaussianLayer, PLearn::RBMGenericParameters, PLearn::RBMJointGenericParameters, PLearn::RBMJointLLParameters, PLearn::RBMLayer, PLearn::RBMLLParameters, PLearn::RBMLQParameters, PLearn::RBMMixedLayer, PLearn::RBMMultinomialLayer, PLearn::RBMParameters, PLearn::RBMQLParameters, PLearn::RBMTruncExpLayer, PLearn::SquaredErrModule, PLearn::StackedModulesModule, PLearn::UndirectedSoftmaxModule, PLearn::KLp0p1RBMModule, PLearn::TreeDBNModule, PLearn::ForwardModule, PLearn::GradNNetLayerModule, PLearn::IdentityModule, PLearn::LayerCostModule, PLearn::LinearCombinationModule, PLearn::LinearFilterModule, PLearn::LogaddOnBagsModule, PLearn::MatrixModule, PLearn::MaxSubsampling2DModule, PLearn::ModuleStackModule, PLearn::NetworkModule, PLearn::NLLCostModule, PLearn::NullModule, PLearn::OnBagsModule, PLearn::ProcessInputCostModule, PLearn::RBMBinomialLayer, PLearn::RBMClassificationModule, PLearn::RBMConnection, PLearn::RBMConv2DConnection, PLearn::RBMDiagonalMatrixConnection, PLearn::RBMGaussianLayer, PLearn::RBMLateralBinomialLayer, PLearn::RBMLayer, PLearn::RBMLocalMultinomialLayer, PLearn::RBMMatrixConnection, PLearn::RBMMatrixConnectionNatGrad, PLearn::RBMMatrixTransposeConnection, PLearn::RBMMixedConnection, PLearn::RBMMixedLayer, PLearn::RBMModule, PLearn::RBMMultinomialLayer, PLearn::RBMMultitaskClassificationModule, PLearn::RBMRateLayer, PLearn::RBMSparse1DMatrixConnection, PLearn::RBMTruncExpLayer, PLearn::RBMWoodsLayer, PLearn::ScaleGradientModule, PLearn::ShuntingNNetLayerModule, PLearn::SoftmaxModule, PLearn::SoftmaxNLLCostModule, PLearn::SplitModule, PLearn::SquaredErrorCostModule, PLearn::Subsampling2DModule, PLearn::Supersampling2DModule, PLearn::TanhModule, PLearn::VBoundDBN2, PLearn::NnlmOutputLayer, and PLearn::NnlmWordRepresentationLayer.

Definition at line 451 of file OnlineLearningModule.cc.

References PLearn::Object::classname(), and name.

Referenced by build(), and OnlineLearningModule().

{
    if (name.empty())
        name = classname();
}


void PLearn::OnlineLearningModule::checkProp ( const TVec< Mat * > &  ports_data)

This method may be called at the end of the 'fprop' or 'bpropAccUpdate' methods (respectively with 'ports_value' or 'ports_gradient' as argument) in order to ensure all required ports have been properly computed (otherwise, an error is thrown).

Definition at line 460 of file OnlineLearningModule.cc.

References PLearn::Object::classname(), getPortName(), i, PLearn::TVec< T >::length(), name, and PLERROR.

Referenced by PLearn::TreeDBNModule::bpropAccUpdate(), PLearn::RBMModule::bpropAccUpdate(), bpropAccUpdate(), PLearn::OnBagsModule::bpropAccUpdate(), PLearn::LinearCombinationModule::bpropAccUpdate(), PLearn::LayerCostModule::bpropAccUpdate(), PLearn::KLp0p1RBMModule::bpropAccUpdate(), PLearn::CostModule::bpropAccUpdate(), PLearn::RBMModule::fprop(), fprop(), PLearn::OnBagsModule::fprop(), PLearn::LinearCombinationModule::fprop(), and PLearn::KLp0p1RBMModule::fprop().

{
#ifdef BOUNDCHECK
    for (int i = 0; i < ports_data.length(); i++) {
        if (ports_data[i] && ports_data[i]->isEmpty())
            PLERROR("In OnlineLearningModule::checkProp - Data for port '%s' "
                    "of module '%s' (of class '%s') was not properly computed "
                    "(this may have happened at the end of a fprop or a "
                    "bpropAccUpdate)", getPortName(i).c_str(), name.c_str(),
                    classname().c_str());
    }
#endif
}


void PLearn::OnlineLearningModule::declareMethods ( RemoteMethodMap &  rmm) [static, protected]

Declare the methods that are remote-callable.

Reimplemented from PLearn::Object.

Reimplemented in PLearn::TreeDBNModule, PLearn::RBMConnection, PLearn::RBMLayer, PLearn::RBMModule, and PLearn::RBMSparse1DMatrixConnection.

Definition at line 336 of file OnlineLearningModule.cc.

References PLearn::Object::_getRemoteMethodMap_(), PLearn::declareMethod(), forget(), getPorts(), PLearn::RemoteMethodMap::inherited(), namedBpropAccUpdate(), namedFprop(), and setLearningRate().

{
    // Insert a backpointer to remote methods; note that this is
    // different than for declareOptions().
    rmm.inherited(inherited::_getRemoteMethodMap_());

    declareMethod(
        rmm, "getPorts", &OnlineLearningModule::getPorts,
        (BodyDoc("Return the list of port names of the module\n"),
         RetDoc ("The list of port names")));

    declareMethod(
        rmm, "forget", &OnlineLearningModule::forget,
        (BodyDoc("Reset the parameters to the state they would be before starting training.\n"
                 "This may involve randomization using the random generator.\n")));

    declareMethod(
        rmm, "namedFprop", &OnlineLearningModule::namedFprop,
        (BodyDoc("Perform the fprop computation on an OnlineLearningModule, which takes matrices\n"
                  "in user-selected input ports and computes outputs in user-selected output-ports.\n"
                  "The function actually computed by the module depends on the selected ports and\n"
                  "on its internal state (options and parameters)\n"),
         ArgDoc ("inputs", "A dictionary of input matrices (one for each input port), indexed by the port names,\n"),
         ArgDoc ("wanted_outputs", "A list of wanted output port names,\n"),
         RetDoc ("A dictionary of the input and output matrices (indexed by their name).\n")));

    declareMethod(
        rmm, "namedBpropAccUpdate", &OnlineLearningModule::namedBpropAccUpdate,
        (BodyDoc("Perform the BpropAccUpdate computation on an OnlineLearningModule, which\n"
                 "takes matrices in user-selected input ports, output ports, and output\n"
                 "gradient ports and computes gradients for user-selected input ports.\n"
                 "The function actually computed by the module depends on the selected ports\n"
                 "and on its internal state (options and parameters)\n"),
         ArgDoc ("values", "A dictionary of named input and output matrices that was\n"
                 "returned by namedFprop (one entry for each input and output port used).\n"),
         ArgDoc ("gradients", "A dictionary of named output (and possibly input) gradient\n"
                 "matrices (the name indexing each matrix is the name of corresponding port).\n"
                 "Output gradient matrices should be full, whereas input gradient matrices\n"
                 "into which to accumulate should have lenght 0 and correct width.\n"),
         ArgDoc ("additional_input_gradients", "A list of wanted input port names,\n"
                 "for which the gradient is desired (no accumulation)\n"),
         RetDoc ("A dictionary of all the input and output gradient matrices (indexed\n"
                 "by their port name), including those in the gradients argument\n"
                 "and those named in the additional_input_gradiaents argument.\n")));

    declareMethod(
        rmm, "setLearningRate", &OnlineLearningModule::setLearningRate,
        (BodyDoc("Allows to change the learning rate or equivalent parameter"),
         ArgDoc ("dynamic_learning_rate",
                 "The value we want for the learning rate")
        ));

}
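Below is a hedged sketch of how the two remote-callable methods documented above might be used from C++. The port names "input" and "output" are assumptions for the example; a real module's ports are whatever its getPorts() returns.

#include <map>
#include <string>
#include "OnlineLearningModule.h"

using namespace PLearn;
using std::map;
using std::string;

// Sketch only: run a named fprop and ask for a single output port.
map<string, Mat> namedFpropSketch(PP<OnlineLearningModule> module,
                                  const Mat& input_batch)
{
    map<string, Mat> inputs;
    inputs["input"] = input_batch;     // one matrix per used input port

    TVec<string> wanted_outputs(1);
    wanted_outputs[0] = "output";      // ports whose values we want back

    // Returns a dictionary holding both the inputs and the computed outputs,
    // which can then be fed to namedBpropAccUpdate together with gradients.
    return module->namedFprop(inputs, wanted_outputs);
}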


void PLearn::OnlineLearningModule::declareOptions ( OptionList &  ol) [static, protected]

Declares the class options.

Reimplemented from PLearn::Object.

Reimplemented in PLearn::ArgmaxModule, PLearn::BackConvolution2DModule, PLearn::BinarizeModule, PLearn::ClassErrorCostModule, PLearn::CombiningCostsModule, PLearn::Convolution2DModule, PLearn::CostModule, PLearn::CrossEntropyCostModule, PLearn::NLLErrModule, PLearn::RBMBinomialLayer, PLearn::RBMConv2DLLParameters, PLearn::RBMGaussianLayer, PLearn::RBMGenericParameters, PLearn::RBMJointGenericParameters, PLearn::RBMJointLLParameters, PLearn::RBMLayer, PLearn::RBMLLParameters, PLearn::RBMLQParameters, PLearn::RBMMixedLayer, PLearn::RBMMultinomialLayer, PLearn::RBMParameters, PLearn::RBMQLParameters, PLearn::RBMTruncExpLayer, PLearn::SquaredErrModule, PLearn::StackedModulesModule, PLearn::UndirectedSoftmaxModule, PLearn::KLp0p1RBMModule, PLearn::TreeDBNModule, PLearn::ForwardModule, PLearn::GradNNetLayerModule, PLearn::IdentityModule, PLearn::LayerCostModule, PLearn::LinearCombinationModule, PLearn::LinearFilterModule, PLearn::LogaddOnBagsModule, PLearn::MatrixModule, PLearn::MaxSubsampling2DModule, PLearn::ModuleStackModule, PLearn::NetworkModule, PLearn::NLLCostModule, PLearn::NullModule, PLearn::OnBagsModule, PLearn::ProcessInputCostModule, PLearn::RBMBinomialLayer, PLearn::RBMClassificationModule, PLearn::RBMConnection, PLearn::RBMConv2DConnection, PLearn::RBMDiagonalMatrixConnection, PLearn::RBMGaussianLayer, PLearn::RBMLateralBinomialLayer, PLearn::RBMLayer, PLearn::RBMLocalMultinomialLayer, PLearn::RBMMatrixConnection, PLearn::RBMMatrixConnectionNatGrad, PLearn::RBMMatrixTransposeConnection, PLearn::RBMMixedConnection, PLearn::RBMMixedLayer, PLearn::RBMModule, PLearn::RBMMultinomialLayer, PLearn::RBMMultitaskClassificationModule, PLearn::RBMRateLayer, PLearn::RBMSparse1DMatrixConnection, PLearn::RBMTruncExpLayer, PLearn::RBMWoodsLayer, PLearn::ScaleGradientModule, PLearn::ShuntingNNetLayerModule, PLearn::SoftmaxModule, PLearn::SoftmaxNLLCostModule, PLearn::SplitModule, PLearn::SquaredErrorCostModule, PLearn::Subsampling2DModule, PLearn::Supersampling2DModule, PLearn::TanhModule, PLearn::VBoundDBN2, PLearn::NnlmOutputLayer, and PLearn::NnlmWordRepresentationLayer.

Definition at line 281 of file OnlineLearningModule.cc.

References PLearn::OptionBase::advanced_level, PLearn::OptionBase::buildoption, PLearn::declareOption(), PLearn::Object::declareOptions(), estimate_simpler_diag_hessian, expdir, input_size, name, output_size, random_gen, use_fast_approximations, and verbosity.

Referenced by PLearn::UndirectedSoftmaxModule::declareOptions(), PLearn::TreeDBNModule::declareOptions(), PLearn::TanhModule::declareOptions(), PLearn::Supersampling2DModule::declareOptions(), PLearn::Subsampling2DModule::declareOptions(), PLearn::StackedModulesModule::declareOptions(), PLearn::SquaredErrModule::declareOptions(), PLearn::SplitModule::declareOptions(), PLearn::ShuntingNNetLayerModule::declareOptions(), PLearn::ScaleGradientModule::declareOptions(), PLearn::RBMParameters::declareOptions(), PLearn::RBMMultitaskClassificationModule::declareOptions(), PLearn::RBMModule::declareOptions(), PLearn::RBMConnection::declareOptions(), PLearn::OnBagsModule::declareOptions(), PLearn::NullModule::declareOptions(), PLearn::NnlmWordRepresentationLayer::declareOptions(), PLearn::NnlmOutputLayer::declareOptions(), PLearn::NLLErrModule::declareOptions(), PLearn::NetworkModule::declareOptions(), PLearn::ModuleStackModule::declareOptions(), PLearn::MaxSubsampling2DModule::declareOptions(), PLearn::MatrixModule::declareOptions(), PLearn::LinearFilterModule::declareOptions(), PLearn::LinearCombinationModule::declareOptions(), PLearn::KLp0p1RBMModule::declareOptions(), PLearn::GradNNetLayerModule::declareOptions(), PLearn::ForwardModule::declareOptions(), PLearn::CostModule::declareOptions(), PLearn::Convolution2DModule::declareOptions(), PLearn::BinarizeModule::declareOptions(), and PLearn::BackConvolution2DModule::declareOptions().

{
    declareOption(ol, "input_size", &OnlineLearningModule::input_size,
                  OptionBase::buildoption,
                  "Size of the input");

    declareOption(ol, "output_size", &OnlineLearningModule::output_size,
                  OptionBase::buildoption,
                  "Size of the output");

    declareOption(ol, "name", &OnlineLearningModule::name,
                  OptionBase::buildoption,
                  "Name of the module (if not provided, the class name is used).");

    declareOption(ol, "use_fast_approximations", &OnlineLearningModule::use_fast_approximations,
                  OptionBase::buildoption,
                  "Use tables to approximate nonlinearities such as sigmoid, tanh, and softplus\n");

    declareOption(ol, "estimate_simpler_diag_hessian",
                  &OnlineLearningModule::estimate_simpler_diag_hessian,
                  OptionBase::buildoption,
                  "Should we compute a simpler diagonal estimation of the"
                  " input Hessian\n"
                  "matrix, using only the first (positive) term in:\n"
                  "  d²C/dx² ~= d²C/dy² (dy/dx)² [+ dC/dy d²y/dx²]\n");


    declareOption(ol, "expdir", &OnlineLearningModule::expdir,
                  OptionBase::buildoption,
                  "Path of the directory associated with this module,\n"
                  "in which it should save any file it wishes to create. \n"
                  "The directory will be created if it does not already"
                  " exist.\n"
                  "If expdir is the empty string (the default),\n"
                  "then the module should not create *any* file.\n");

    declareOption(ol, "random_gen",
                  &OnlineLearningModule::random_gen,
                  OptionBase::buildoption,
                  "Pointer to an optional random number generator,\n"
                  "e.g. for initializing parameters or any non-deterministic"
                  " operation\n"
                  "required by the module.\n");

    declareOption(ol, "verbosity", &OnlineLearningModule::verbosity,
                  OptionBase::buildoption,
                  "Controls the level of verbosity of the module.",
                  OptionBase::advanced_level);

    inherited::declareOptions(ol);
}


static const PPath& PLearn::OnlineLearningModule::declaringFile ( ) [inline, static]

Reimplemented from PLearn::Object.

Reimplemented in PLearn::ArgmaxModule, PLearn::BackConvolution2DModule, PLearn::BinarizeModule, PLearn::ClassErrorCostModule, PLearn::CombiningCostsModule, PLearn::Convolution2DModule, PLearn::CostModule, PLearn::CrossEntropyCostModule, PLearn::NLLErrModule, PLearn::RBMBinomialLayer, PLearn::RBMConv2DLLParameters, PLearn::RBMGaussianLayer, PLearn::RBMGenericParameters, PLearn::RBMJointGenericParameters, PLearn::RBMJointLLParameters, PLearn::RBMLayer, PLearn::RBMLLParameters, PLearn::RBMLQParameters, PLearn::RBMMixedLayer, PLearn::RBMMultinomialLayer, PLearn::RBMParameters, PLearn::RBMQLParameters, PLearn::RBMTruncExpLayer, PLearn::SquaredErrModule, PLearn::StackedModulesModule, PLearn::UndirectedSoftmaxModule, PLearn::KLp0p1RBMModule, PLearn::TreeDBNModule, PLearn::ForwardModule, PLearn::GradNNetLayerModule, PLearn::IdentityModule, PLearn::LayerCostModule, PLearn::LinearCombinationModule, PLearn::LinearFilterModule, PLearn::LogaddOnBagsModule, PLearn::MatrixModule, PLearn::MaxSubsampling2DModule, PLearn::ModuleStackModule, PLearn::NetworkModule, PLearn::NLLCostModule, PLearn::NullModule, PLearn::OnBagsModule, PLearn::ProcessInputCostModule, PLearn::RBMBinomialLayer, PLearn::RBMClassificationModule, PLearn::RBMConnection, PLearn::RBMConv2DConnection, PLearn::RBMDiagonalMatrixConnection, PLearn::RBMGaussianLayer, PLearn::RBMLateralBinomialLayer, PLearn::RBMLayer, PLearn::RBMLocalMultinomialLayer, PLearn::RBMMatrixConnection, PLearn::RBMMatrixConnectionNatGrad, PLearn::RBMMatrixTransposeConnection, PLearn::RBMMixedConnection, PLearn::RBMMixedLayer, PLearn::RBMModule, PLearn::RBMMultinomialLayer, PLearn::RBMMultitaskClassificationModule, PLearn::RBMRateLayer, PLearn::RBMSparse1DMatrixConnection, PLearn::RBMTruncExpLayer, PLearn::RBMWoodsLayer, PLearn::ScaleGradientModule, PLearn::ShuntingNNetLayerModule, PLearn::SoftmaxModule, PLearn::SoftmaxNLLCostModule, PLearn::SplitModule, PLearn::SquaredErrorCostModule, PLearn::Subsampling2DModule, PLearn::Supersampling2DModule, PLearn::TanhModule, PLearn::VBoundDBN2, PLearn::NnlmOutputLayer, and PLearn::NnlmWordRepresentationLayer.

Definition at line 292 of file OnlineLearningModule.h.

OnlineLearningModule * PLearn::OnlineLearningModule::deepCopy ( CopiesMap &  copies) const [virtual]

Reimplemented from PLearn::Object.

Reimplemented in PLearn::ArgmaxModule, PLearn::BackConvolution2DModule, PLearn::BinarizeModule, PLearn::ClassErrorCostModule, PLearn::CombiningCostsModule, PLearn::Convolution2DModule, PLearn::CostModule, PLearn::CrossEntropyCostModule, PLearn::NLLErrModule, PLearn::RBMBinomialLayer, PLearn::RBMConv2DLLParameters, PLearn::RBMGaussianLayer, PLearn::RBMGenericParameters, PLearn::RBMJointGenericParameters, PLearn::RBMJointLLParameters, PLearn::RBMLayer, PLearn::RBMLLParameters, PLearn::RBMLQParameters, PLearn::RBMMixedLayer, PLearn::RBMMultinomialLayer, PLearn::RBMParameters, PLearn::RBMQLParameters, PLearn::RBMTruncExpLayer, PLearn::SquaredErrModule, PLearn::StackedModulesModule, PLearn::UndirectedSoftmaxModule, PLearn::KLp0p1RBMModule, PLearn::TreeDBNModule, PLearn::ForwardModule, PLearn::GradNNetLayerModule, PLearn::IdentityModule, PLearn::LayerCostModule, PLearn::LinearCombinationModule, PLearn::LinearFilterModule, PLearn::LogaddOnBagsModule, PLearn::MatrixModule, PLearn::MaxSubsampling2DModule, PLearn::ModuleStackModule, PLearn::NetworkModule, PLearn::NLLCostModule, PLearn::NullModule, PLearn::OnBagsModule, PLearn::ProcessInputCostModule, PLearn::RBMBinomialLayer, PLearn::RBMClassificationModule, PLearn::RBMConnection, PLearn::RBMConv2DConnection, PLearn::RBMDiagonalMatrixConnection, PLearn::RBMGaussianLayer, PLearn::RBMLateralBinomialLayer, PLearn::RBMLayer, PLearn::RBMLocalMultinomialLayer, PLearn::RBMMatrixConnection, PLearn::RBMMatrixConnectionNatGrad, PLearn::RBMMatrixTransposeConnection, PLearn::RBMMixedConnection, PLearn::RBMMixedLayer, PLearn::RBMModule, PLearn::RBMMultinomialLayer, PLearn::RBMMultitaskClassificationModule, PLearn::RBMRateLayer, PLearn::RBMSparse1DMatrixConnection, PLearn::RBMTruncExpLayer, PLearn::RBMWoodsLayer, PLearn::ScaleGradientModule, PLearn::ShuntingNNetLayerModule, PLearn::SoftmaxModule, PLearn::SoftmaxNLLCostModule, PLearn::SplitModule, PLearn::SquaredErrorCostModule, PLearn::Subsampling2DModule, PLearn::Supersampling2DModule, PLearn::TanhModule, PLearn::VBoundDBN2, PLearn::NnlmOutputLayer, and PLearn::NnlmWordRepresentationLayer.

Definition at line 63 of file OnlineLearningModule.cc.

virtual void PLearn::OnlineLearningModule::finalize ( ) [inline, virtual]

optionally perform some processing after training, or after a series of fprop/bpropUpdate calls to prepare the model for truly out-of-sample operation

Reimplemented in PLearn::CombiningCostsModule, PLearn::ForwardModule, PLearn::ModuleStackModule, and PLearn::ProcessInputCostModule.

Definition at line 239 of file OnlineLearningModule.h.

{}
virtual void PLearn::OnlineLearningModule::forget ( ) [pure virtual]

Reset the parameters to the state they would be in BEFORE starting training.

Note that this method is necessarily called from build().

Implemented in PLearn::ArgmaxModule, PLearn::BackConvolution2DModule, PLearn::BinarizeModule, PLearn::ClassErrorCostModule, PLearn::CombiningCostsModule, PLearn::Convolution2DModule, PLearn::CostModule, PLearn::NLLErrModule, PLearn::RBMConv2DLLParameters, PLearn::RBMGenericParameters, PLearn::RBMJointGenericParameters, PLearn::RBMJointLLParameters, PLearn::RBMLLParameters, PLearn::RBMLQParameters, PLearn::RBMQLParameters, PLearn::SquaredErrModule, PLearn::StackedModulesModule, PLearn::UndirectedSoftmaxModule, PLearn::KLp0p1RBMModule, PLearn::TreeDBNModule, PLearn::ForwardModule, PLearn::GradNNetLayerModule, PLearn::IdentityModule, PLearn::LayerCostModule, PLearn::LinearCombinationModule, PLearn::LinearFilterModule, PLearn::MatrixModule, PLearn::MaxSubsampling2DModule, PLearn::ModuleStackModule, PLearn::NetworkModule, PLearn::NullModule, PLearn::OnBagsModule, PLearn::ProcessInputCostModule, PLearn::RBMClassificationModule, PLearn::RBMConv2DConnection, PLearn::RBMDiagonalMatrixConnection, PLearn::RBMGaussianLayer, PLearn::RBMLateralBinomialLayer, PLearn::RBMLayer, PLearn::RBMMatrixConnection, PLearn::RBMMatrixTransposeConnection, PLearn::RBMMixedConnection, PLearn::RBMMixedLayer, PLearn::RBMModule, PLearn::RBMMultitaskClassificationModule, PLearn::RBMSparse1DMatrixConnection, PLearn::ScaleGradientModule, PLearn::ShuntingNNetLayerModule, PLearn::SoftmaxModule, PLearn::SplitModule, PLearn::Subsampling2DModule, PLearn::Supersampling2DModule, PLearn::TanhModule, PLearn::VBoundDBN2, PLearn::NnlmOutputLayer, and PLearn::NnlmWordRepresentationLayer.

Referenced by declareMethods().

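For illustration only, a minimal sketch of what an override might look like in a hypothetical subclass (MyLayerModule, with illustrative parameters 'weights' and 'bias'; this is not taken from the PLearn sources, and random re-initialization via the random_gen attribute documented below is equally common):

void MyLayerModule::forget()
{
    // Reset the hypothetical parameters to their pre-training state.
    weights.resize(output_size, input_size);
    weights.fill(0);
    bias.resize(output_size);
    bias.fill(0);
}
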
void PLearn::OnlineLearningModule::fprop ( const Mat & inputs, Mat & outputs ) [virtual]

Mini-batch fprop.

Default implementation raises an error. SOON TO BE DEPRECATED, USE fprop(const TVec<Mat*>& ports_value)

Reimplemented in PLearn::GradNNetLayerModule, PLearn::IdentityModule, PLearn::LinearFilterModule, PLearn::ModuleStackModule, PLearn::RBMBinomialLayer, PLearn::RBMClassificationModule, PLearn::RBMConnection, PLearn::RBMLateralBinomialLayer, PLearn::RBMLayer, PLearn::RBMMixedLayer, PLearn::RBMWoodsLayer, PLearn::ScaleGradientModule, PLearn::ShuntingNNetLayerModule, PLearn::SoftmaxModule, and PLearn::TanhModule.

Definition at line 92 of file OnlineLearningModule.cc.

References PLearn::Object::classname(), and PLERROR.

{
    PLERROR("In OnlineLearningModule::fprop - The mini-batch version of "
            "'fprop' for class '%s' is not implemented. Implementation is "
            "required out of safety, to ensure a subsequent call to "
            "'bpropUpdate' can use the correctly updated data",
            classname().c_str());
}
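
For illustration, a subclass with fixed-size ports might implement it along the following lines (hypothetical MyTanhLikeModule applying an element-wise non-linearity; only input_size, output_size and the Mat interface are taken from this class):

void MyTanhLikeModule::fprop(const Mat& inputs, Mat& outputs)
{
    PLASSERT( inputs.width() == input_size && input_size == output_size );
    outputs.resize(inputs.length(), output_size);
    for (int i = 0; i < inputs.length(); i++)
        for (int j = 0; j < input_size; j++)
            outputs(i, j) = tanh(inputs(i, j));  // one row per sample of the mini-batch
}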

void PLearn::OnlineLearningModule::fprop ( const Vec & input, Vec & output ) const [virtual]

given the input, compute the output (possibly resize it appropriately) SOON TO BE DEPRECATED, USE fprop(const TVec<Mat*>& ports_value)

Reimplemented in PLearn::BackConvolution2DModule, PLearn::Convolution2DModule, PLearn::CostModule, PLearn::NLLErrModule, PLearn::RBMJointGenericParameters, PLearn::RBMJointLLParameters, PLearn::RBMParameters, PLearn::SquaredErrModule, PLearn::StackedModulesModule, PLearn::UndirectedSoftmaxModule, PLearn::KLp0p1RBMModule, PLearn::GradNNetLayerModule, PLearn::IdentityModule, PLearn::LinearFilterModule, PLearn::MatrixModule, PLearn::ModuleStackModule, PLearn::NetworkModule, PLearn::NullModule, PLearn::RBMBinomialLayer, PLearn::RBMClassificationModule, PLearn::RBMConnection, PLearn::RBMGaussianLayer, PLearn::RBMLateralBinomialLayer, PLearn::RBMLayer, PLearn::RBMLocalMultinomialLayer, PLearn::RBMMixedLayer, PLearn::RBMModule, PLearn::RBMMultinomialLayer, PLearn::RBMMultitaskClassificationModule, PLearn::RBMRateLayer, PLearn::RBMTruncExpLayer, PLearn::RBMWoodsLayer, PLearn::ScaleGradientModule, PLearn::ShuntingNNetLayerModule, PLearn::SoftmaxModule, PLearn::Subsampling2DModule, PLearn::Supersampling2DModule, PLearn::TanhModule, PLearn::NnlmOutputLayer, and PLearn::NnlmWordRepresentationLayer.

Definition at line 101 of file OnlineLearningModule.cc.

References PLERROR.

Referenced by fprop(), PLearn::CostModule::fprop(), and namedFprop().

{
    PLERROR("In OnlineLearningModule::fprop - This variant is deprecated, use fprop(ports_value)\n");
}

void PLearn::OnlineLearningModule::fprop ( const TVec< Mat * > &  ports_value) [virtual]

Perform a fprop step.

Each Mat* pointer in the 'ports_value' vector can be one of:

  • a full matrix: this is data that is provided to the module, and can be used to compute other ports' values
  • an empty matrix: this is what we want to compute
  • a NULL pointer: this is data that is not available, but whose value does not need to be returned (or even computed)

The default version will either:

  • call the mini-batch version of the standard fprop if 'ports_value' has size 2, with the first value being provided (and the second being the desired output)
  • crash otherwise

Reimplemented in PLearn::ArgmaxModule, PLearn::BinarizeModule, PLearn::Convolution2DModule, PLearn::CostModule, PLearn::KLp0p1RBMModule, PLearn::TreeDBNModule, PLearn::ForwardModule, PLearn::LayerCostModule, PLearn::LinearCombinationModule, PLearn::MatrixModule, PLearn::MaxSubsampling2DModule, PLearn::NetworkModule, PLearn::NLLCostModule, PLearn::NullModule, PLearn::OnBagsModule, PLearn::RBMModule, PLearn::SoftmaxNLLCostModule, PLearn::SplitModule, PLearn::SquaredErrorCostModule, and PLearn::VBoundDBN2.

Definition at line 106 of file OnlineLearningModule.cc.

References checkProp(), PLearn::Object::classname(), fprop(), PLearn::TMat< T >::isEmpty(), PLearn::TVec< T >::length(), nPorts(), PLASSERT, and PLERROR.

{
    PLASSERT( ports_value.length() == nPorts() );
    if (ports_value.length() == 2)
    {
        Mat* m1 = ports_value[0];
        Mat* m2 = ports_value[1];
        if (m1 && m2 && !m1->isEmpty() && m2->isEmpty()) {
            // We can re-use previous code for standard mini-batch fprop.
            fprop(*m1, *m2);
            checkProp(ports_value);
            return;
        }
    }
    PLERROR("In OnlineLearningModule::fprop - Port configuration not "
            "implemented for class '%s'", classname().c_str());
}
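
As a usage sketch (assuming 'module' points to an instance of a concrete subclass exposing the default "input" and "output" ports, and 'input_batch' is an illustrative data matrix), the caller provides the input as a full matrix and an empty matrix for the port it wants computed:

TVec<Mat*> ports_value(module->nPorts()); // entries not set below stay null (unavailable ports), as in namedFprop below
Mat output(0, 0);                         // empty: to be computed by the module
ports_value[module->getPortIndex("input")]  = &input_batch;
ports_value[module->getPortIndex("output")] = &output;
module->fprop(ports_value);               // on return, 'output' holds the computed values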

int PLearn::OnlineLearningModule::getPortIndex ( const string &  port) [virtual]

Return the index (as in the list of ports returned by getPorts()) of a given port.

If 'port' does not exist, -1 is returned.

Reimplemented in PLearn::KLp0p1RBMModule, PLearn::LayerCostModule, PLearn::OnBagsModule, and PLearn::RBMModule.

Definition at line 477 of file OnlineLearningModule.cc.

References PLearn::TVec< T >::find(), and getPorts().

Referenced by PLearn::TreeDBNModule::bpropAccUpdate(), PLearn::TreeDBNModule::fprop(), PLearn::TreeDBNModule::full_fprop(), getPortLength(), getPortWidth(), namedBpropAccUpdate(), and namedFprop().

{
    return getPorts().find(port);
}

int PLearn::OnlineLearningModule::getPortLength ( const string &  port)

Return the length of a specific port.

Definition at line 521 of file OnlineLearningModule.cc.

References PLearn::Object::classname(), getPortIndex(), getPortSizes(), name, and PLERROR.

{
    int port_index = getPortIndex(port);
    if (port_index < 0)
        PLERROR("In OnlineLearningModule::getPortLength - Port '%s' not known "
                "by module '%s' of class '%s'",
                port.c_str(), name.c_str(), classname().c_str());
    return getPortSizes()(port_index, 0);
}

string PLearn::OnlineLearningModule::getPortName ( int  i)

Return name of the i-th port.

Definition at line 485 of file OnlineLearningModule.cc.

References getPorts(), and i.

Referenced by checkProp().

{
    return getPorts()[i];
}

const TVec< string > & PLearn::OnlineLearningModule::getPorts ( ) [virtual]

Return the list of ports in the module.

The default implementation returns a pair ("input", "output") to handle the most common case.

Reimplemented in PLearn::ArgmaxModule, PLearn::BinarizeModule, PLearn::CostModule, PLearn::KLp0p1RBMModule, PLearn::TreeDBNModule, PLearn::ForwardModule, PLearn::LayerCostModule, PLearn::LinearCombinationModule, PLearn::MatrixModule, PLearn::MaxSubsampling2DModule, PLearn::NetworkModule, PLearn::NullModule, PLearn::OnBagsModule, PLearn::RBMConnection, PLearn::RBMModule, PLearn::SplitModule, and PLearn::VBoundDBN2.

Definition at line 493 of file OnlineLearningModule.cc.

References PLearn::TVec< T >::append(), and PLearn::TVec< T >::isEmpty().

Referenced by declareMethods(), getPortIndex(), getPortName(), PLearn::BinarizeModule::getPorts(), namedBpropAccUpdate(), namedFprop(), and nPorts().

                                                   {
    static TVec<string> default_ports;
    if (default_ports.isEmpty()) {
        default_ports.append("input");
        default_ports.append("output");
    }
    return default_ports;
}

const TMat< int > & PLearn::OnlineLearningModule::getPortSizes ( ) [virtual]

Return the size of all ports, in the form of a two-column matrix, where each row represents a port, and the two numbers on a row are respectively its length and its width (with -1 representing an undefined or variable value).

The default value fills this matrix with:

  • in the first column (lengths): -1
  • in the second column (widths):
    • -1 if nPorts() < 2
    • 'input_size' for the first row and 'output_size' for the second row if nPorts() >= 2

Reimplemented in PLearn::BinarizeModule, PLearn::CostModule, PLearn::KLp0p1RBMModule, PLearn::ForwardModule, PLearn::LayerCostModule, PLearn::MatrixModule, PLearn::MaxSubsampling2DModule, PLearn::NetworkModule, PLearn::NullModule, PLearn::OnBagsModule, PLearn::RBMConnection, PLearn::RBMModule, PLearn::SplitModule, and PLearn::VBoundDBN2.

Definition at line 505 of file OnlineLearningModule.cc.

References PLearn::TMat< T >::fill(), input_size, PLearn::TMat< T >::length(), nPorts(), output_size, port_sizes, and PLearn::TMat< T >::resize().

Referenced by getPortLength(), and getPortWidth().

                                                    {
    int n_ports = nPorts();
    if (port_sizes.length() != n_ports) {
        port_sizes.resize(n_ports, 2);
        port_sizes.fill(-1);
        if (n_ports >= 2) {
            port_sizes(0, 1) = input_size;
            port_sizes(1, 1) = output_size;
        }
    }
    return port_sizes;
}

int PLearn::OnlineLearningModule::getPortWidth ( const string &  port)

Return the width of a specific port.

Definition at line 534 of file OnlineLearningModule.cc.

References getPortIndex(), getPortSizes(), and PLASSERT.

{
    PLASSERT( getPortIndex(port) >= 0 );
    return getPortSizes()(getPortIndex(port), 1);
}
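
Together with getPorts() and nPorts(), these accessors let calling code inspect a module's interface by name; a small sketch (module instance 'module'; pout is PLearn's standard output stream):

const TVec<string>& port_names = module->getPorts();
for (int i = 0; i < module->nPorts(); i++)
    pout << port_names[i] << ": "
         << module->getPortLength(port_names[i]) << " x "
         << module->getPortWidth(port_names[i]) << endl;  // -1 means undefined or variable size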

void PLearn::OnlineLearningModule::makeDeepCopyFromShallowCopy ( CopiesMap copies) [virtual]

Transforms a shallow copy into a deep copy.

Reimplemented from PLearn::Object.

Reimplemented in PLearn::ArgmaxModule, PLearn::BackConvolution2DModule, PLearn::BinarizeModule, PLearn::ClassErrorCostModule, PLearn::CombiningCostsModule, PLearn::Convolution2DModule, PLearn::CostModule, PLearn::CrossEntropyCostModule, PLearn::NLLErrModule, PLearn::RBMBinomialLayer, PLearn::RBMConv2DLLParameters, PLearn::RBMGaussianLayer, PLearn::RBMGenericParameters, PLearn::RBMJointGenericParameters, PLearn::RBMJointLLParameters, PLearn::RBMLayer, PLearn::RBMLLParameters, PLearn::RBMLQParameters, PLearn::RBMMixedLayer, PLearn::RBMMultinomialLayer, PLearn::RBMParameters, PLearn::RBMQLParameters, PLearn::RBMTruncExpLayer, PLearn::SquaredErrModule, PLearn::StackedModulesModule, PLearn::UndirectedSoftmaxModule, PLearn::KLp0p1RBMModule, PLearn::TreeDBNModule, PLearn::ForwardModule, PLearn::GradNNetLayerModule, PLearn::IdentityModule, PLearn::LayerCostModule, PLearn::LinearCombinationModule, PLearn::LinearFilterModule, PLearn::LogaddOnBagsModule, PLearn::MatrixModule, PLearn::MaxSubsampling2DModule, PLearn::ModuleStackModule, PLearn::NetworkModule, PLearn::NLLCostModule, PLearn::NullModule, PLearn::OnBagsModule, PLearn::ProcessInputCostModule, PLearn::RBMBinomialLayer, PLearn::RBMClassificationModule, PLearn::RBMConnection, PLearn::RBMConv2DConnection, PLearn::RBMDiagonalMatrixConnection, PLearn::RBMGaussianLayer, PLearn::RBMLateralBinomialLayer, PLearn::RBMLayer, PLearn::RBMLocalMultinomialLayer, PLearn::RBMMatrixConnection, PLearn::RBMMatrixConnectionNatGrad, PLearn::RBMMatrixTransposeConnection, PLearn::RBMMixedConnection, PLearn::RBMMixedLayer, PLearn::RBMModule, PLearn::RBMMultinomialLayer, PLearn::RBMMultitaskClassificationModule, PLearn::RBMRateLayer, PLearn::RBMTruncExpLayer, PLearn::RBMWoodsLayer, PLearn::ScaleGradientModule, PLearn::ShuntingNNetLayerModule, PLearn::SoftmaxModule, PLearn::SoftmaxNLLCostModule, PLearn::SplitModule, PLearn::SquaredErrorCostModule, PLearn::Subsampling2DModule, PLearn::Supersampling2DModule, PLearn::TanhModule, PLearn::VBoundDBN2, PLearn::NnlmOutputLayer, and PLearn::NnlmWordRepresentationLayer.

Definition at line 268 of file OnlineLearningModule.cc.

References PLearn::deepCopyField(), PLearn::Object::makeDeepCopyFromShallowCopy(), port_sizes, random_gen, tmp_input_diag_hessian, tmp_input_gradient, and tmpm_input_gradient.

Referenced by PLearn::UndirectedSoftmaxModule::makeDeepCopyFromShallowCopy(), PLearn::TreeDBNModule::makeDeepCopyFromShallowCopy(), PLearn::TanhModule::makeDeepCopyFromShallowCopy(), PLearn::Supersampling2DModule::makeDeepCopyFromShallowCopy(), PLearn::Subsampling2DModule::makeDeepCopyFromShallowCopy(), PLearn::StackedModulesModule::makeDeepCopyFromShallowCopy(), PLearn::SquaredErrModule::makeDeepCopyFromShallowCopy(), PLearn::SplitModule::makeDeepCopyFromShallowCopy(), PLearn::ShuntingNNetLayerModule::makeDeepCopyFromShallowCopy(), PLearn::ScaleGradientModule::makeDeepCopyFromShallowCopy(), PLearn::RBMParameters::makeDeepCopyFromShallowCopy(), PLearn::RBMMultitaskClassificationModule::makeDeepCopyFromShallowCopy(), PLearn::RBMModule::makeDeepCopyFromShallowCopy(), PLearn::RBMConnection::makeDeepCopyFromShallowCopy(), PLearn::OnBagsModule::makeDeepCopyFromShallowCopy(), PLearn::NullModule::makeDeepCopyFromShallowCopy(), PLearn::NnlmWordRepresentationLayer::makeDeepCopyFromShallowCopy(), PLearn::NnlmOutputLayer::makeDeepCopyFromShallowCopy(), PLearn::NLLErrModule::makeDeepCopyFromShallowCopy(), PLearn::NetworkModule::makeDeepCopyFromShallowCopy(), PLearn::ModuleStackModule::makeDeepCopyFromShallowCopy(), PLearn::MaxSubsampling2DModule::makeDeepCopyFromShallowCopy(), PLearn::MatrixModule::makeDeepCopyFromShallowCopy(), PLearn::LinearFilterModule::makeDeepCopyFromShallowCopy(), PLearn::LinearCombinationModule::makeDeepCopyFromShallowCopy(), PLearn::KLp0p1RBMModule::makeDeepCopyFromShallowCopy(), PLearn::GradNNetLayerModule::makeDeepCopyFromShallowCopy(), PLearn::ForwardModule::makeDeepCopyFromShallowCopy(), PLearn::CostModule::makeDeepCopyFromShallowCopy(), PLearn::Convolution2DModule::makeDeepCopyFromShallowCopy(), PLearn::BinarizeModule::makeDeepCopyFromShallowCopy(), and PLearn::BackConvolution2DModule::makeDeepCopyFromShallowCopy().

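The usual subclass pattern, as reflected in the references above, is to forward to the inherited method and then deep-copy each owned field; a sketch with illustrative members ('inherited' being the customary PLearn typedef for the base class):

void MyLayerModule::makeDeepCopyFromShallowCopy(CopiesMap& copies)
{
    inherited::makeDeepCopyFromShallowCopy(copies);  // base-class fields (e.g. random_gen, port_sizes)
    deepCopyField(weights, copies);                  // hypothetical parameters owned by this module
    deepCopyField(bias, copies);
}
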
map< string, Mat > PLearn::OnlineLearningModule::namedBpropAccUpdate ( map< string, Mat > & values, map< string, Mat > & gradients, TVec< string > additional_input_gradients ) [virtual]

Definition at line 416 of file OnlineLearningModule.cc.

References PLearn::TVec< T >::begin(), bpropAccUpdate(), for(), getPortIndex(), getPorts(), i, PLearn::TMat< T >::length(), PLearn::TVec< T >::length(), nPorts(), PLearn::TMat< T >::resize(), and PLearn::TMat< T >::width().

Referenced by declareMethods().

{
    map<string,Mat> all_gradients;
    TVec<string> port_names = getPorts();
    TVec<Mat*> ports_value(nPorts());
    TVec<Mat*> ports_gradient(nPorts());
    map<string,Mat>::iterator it=values.begin();
    for (;it!=values.end();++it)
        ports_value[getPortIndex(it->first)]= &it->second;
    it=gradients.begin();
    for (;it!=gradients.end();++it)
        ports_gradient[getPortIndex(it->first)]= &it->second;
    for (int i=0;i<additional_input_gradients.length();i++)
    {
        Mat port_value = values[additional_input_gradients[i]];
        // the additional input gradients are to be initialized as zero matrices
        Mat* port_gradient = new Mat(port_value.length(),port_value.width());
        port_gradient->resize(0,port_value.width());
        ports_gradient[getPortIndex(additional_input_gradients[i])]= port_gradient;
    }
    bpropAccUpdate(ports_value,ports_gradient);
    for (it=gradients.begin();it!=gradients.end();++it)
        all_gradients[it->first]=it->second;
    for (int i=0;i<additional_input_gradients.length();i++)
        all_gradients[additional_input_gradients[i]]=
            *ports_gradient[getPortIndex(additional_input_gradients[i])];
    return all_gradients;
}
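
As a usage sketch (illustrative names: 'values' would typically be the map returned by namedFprop, documented below, and 'output_gradient' is the gradient received for the "output" port), the caller supplies the known gradients and lists the ports whose gradients should be computed from zero and returned:

map<string, Mat> gradients;
gradients["output"] = output_gradient;        // gradient flowing back into the module
TVec<string> wanted_gradients;
wanted_gradients.append("input");             // additional input gradients to compute and return
map<string, Mat> all_gradients =
    module->namedBpropAccUpdate(values, gradients, wanted_gradients);
Mat input_gradient = all_gradients["input"];  // parameters have also been updated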

map< string, Mat > PLearn::OnlineLearningModule::namedFprop ( map< string, Mat > & inputs, TVec< string > wanted_outputs ) [virtual]

Definition at line 390 of file OnlineLearningModule.cc.

References for(), fprop(), getPortIndex(), getPorts(), i, PLearn::TVec< T >::length(), nPorts(), and PLASSERT_MSG.

Referenced by declareMethods().

{
    map<string,Mat> outputs;
    TVec<string> port_names = getPorts();
    TVec<Mat*> ports_value(nPorts());
    map<string,Mat>::iterator it=inputs.begin();
    for (;it!=inputs.end();++it)
    {
        int port_index=getPortIndex(it->first);
        PLASSERT_MSG(port_index>=0,"Unknown port name: "+it->first);
        ports_value[port_index]= &it->second;
    }
    for (int i=0;i<wanted_outputs.length();i++)
    {
        int port_index=getPortIndex(wanted_outputs[i]);
        PLASSERT_MSG(port_index>=0,"Unknown port name: "+wanted_outputs[i]);
        ports_value[port_index]= new Mat(0,0);
    }
    fprop(ports_value);
    for (it=inputs.begin();it!=inputs.end();++it)
        outputs[it->first]=it->second;
    for (int i=0;i<wanted_outputs.length();i++)
        outputs[wanted_outputs[i]]= *ports_value[getPortIndex(wanted_outputs[i])];
    return outputs;
}
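
A usage sketch (module instance 'module' with the default ports; 'input_batch' is an illustrative data matrix):

map<string, Mat> inputs;
inputs["input"] = input_batch;   // values provided to the module
TVec<string> wanted;
wanted.append("output");         // ports whose values should be computed
map<string, Mat> outputs = module->namedFprop(inputs, wanted);
Mat output_batch = outputs["output"];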

int PLearn::OnlineLearningModule::nPorts ( )

Return the number of ports in the module.

Reimplemented in PLearn::SplitModule.

Definition at line 543 of file OnlineLearningModule.cc.

References getPorts(), and PLearn::TVec< T >::length().

Referenced by PLearn::OnBagsModule::bpropAccUpdate(), PLearn::LayerCostModule::bpropAccUpdate(), PLearn::BinarizeModule::bpropAccUpdate(), PLearn::KLp0p1RBMModule::bpropAccUpdate(), PLearn::RBMSparse1DMatrixConnection::bpropAccUpdate(), PLearn::NullModule::bpropAccUpdate(), PLearn::RBMConv2DConnection::bpropAccUpdate(), PLearn::MaxSubsampling2DModule::bpropAccUpdate(), PLearn::RBMModule::bpropAccUpdate(), PLearn::Convolution2DModule::bpropAccUpdate(), PLearn::RBMMatrixConnection::bpropAccUpdate(), PLearn::TreeDBNModule::bpropAccUpdate(), PLearn::Convolution2DModule::build_(), PLearn::OnBagsModule::build_(), PLearn::RBMModule::build_(), PLearn::MaxSubsampling2DModule::build_(), PLearn::KLp0p1RBMModule::build_(), PLearn::RBMConnection::build_(), PLearn::LayerCostModule::build_(), PLearn::BinarizeModule::fprop(), PLearn::MaxSubsampling2DModule::fprop(), PLearn::TreeDBNModule::fprop(), PLearn::Convolution2DModule::fprop(), PLearn::NullModule::fprop(), PLearn::LayerCostModule::fprop(), fprop(), PLearn::CostModule::fprop(), PLearn::OnBagsModule::fprop(), PLearn::RBMModule::fprop(), PLearn::KLp0p1RBMModule::fprop(), PLearn::TreeDBNModule::full_fprop(), PLearn::CostModule::getPortSizes(), getPortSizes(), PLearn::TreeDBNModule::initSampling(), namedBpropAccUpdate(), namedFprop(), and PLearn::TreeDBNModule::sample().

{
    return getPorts().length();
}

void PLearn::OnlineLearningModule::setLearningRate ( real  dynamic_learning_rate) [virtual]

Reimplemented in PLearn::ClassErrorCostModule, PLearn::CombiningCostsModule, PLearn::CrossEntropyCostModule, PLearn::KLp0p1RBMModule, PLearn::ForwardModule, PLearn::GradNNetLayerModule, PLearn::LayerCostModule, PLearn::LinearFilterModule, PLearn::MaxSubsampling2DModule, PLearn::ModuleStackModule, PLearn::NLLCostModule, PLearn::OnBagsModule, PLearn::ProcessInputCostModule, PLearn::RBMConnection, PLearn::RBMLayer, PLearn::RBMMixedConnection, PLearn::RBMMixedLayer, PLearn::RBMModule, PLearn::ScaleGradientModule, PLearn::ShuntingNNetLayerModule, PLearn::SoftmaxModule, and PLearn::SoftmaxNLLCostModule.

Definition at line 247 of file OnlineLearningModule.cc.

References PLearn::Object::classname(), and PLWARNING.

Referenced by declareMethods().

{
    PLWARNING("In OnlineLearningModule::setLearningRate - The derived class "
            "(%s) does not have a learning rate that can be changed from "
            "outside. If it should have one, please implement setLearningRate "
            "in it", classname().c_str());
}
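
A module that does expose a learning rate simply stores the new value; an illustrative override (the 'learning_rate' member is hypothetical here, although many of the classes listed above define one):

void MyLayerModule::setLearningRate(real dynamic_learning_rate)
{
    learning_rate = dynamic_learning_rate;
}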


Member Data Documentation

Reimplemented from PLearn::Object.

Reimplemented in PLearn::ArgmaxModule, PLearn::BackConvolution2DModule, PLearn::BinarizeModule, PLearn::ClassErrorCostModule, PLearn::CombiningCostsModule, PLearn::Convolution2DModule, PLearn::CostModule, PLearn::CrossEntropyCostModule, PLearn::NLLErrModule, PLearn::RBMBinomialLayer, PLearn::RBMConv2DLLParameters, PLearn::RBMGaussianLayer, PLearn::RBMGenericParameters, PLearn::RBMJointGenericParameters, PLearn::RBMJointLLParameters, PLearn::RBMLayer, PLearn::RBMLLParameters, PLearn::RBMLQParameters, PLearn::RBMMixedLayer, PLearn::RBMMultinomialLayer, PLearn::RBMParameters, PLearn::RBMQLParameters, PLearn::RBMTruncExpLayer, PLearn::SquaredErrModule, PLearn::StackedModulesModule, PLearn::UndirectedSoftmaxModule, PLearn::KLp0p1RBMModule, PLearn::TreeDBNModule, PLearn::ForwardModule, PLearn::GradNNetLayerModule, PLearn::IdentityModule, PLearn::LayerCostModule, PLearn::LinearCombinationModule, PLearn::LinearFilterModule, PLearn::LogaddOnBagsModule, PLearn::MatrixModule, PLearn::MaxSubsampling2DModule, PLearn::ModuleStackModule, PLearn::NetworkModule, PLearn::NLLCostModule, PLearn::NullModule, PLearn::OnBagsModule, PLearn::ProcessInputCostModule, PLearn::RBMClassificationModule, PLearn::RBMConnection, PLearn::RBMConv2DConnection, PLearn::RBMDiagonalMatrixConnection, PLearn::RBMLateralBinomialLayer, PLearn::RBMLocalMultinomialLayer, PLearn::RBMMatrixConnection, PLearn::RBMMatrixConnectionNatGrad, PLearn::RBMMatrixTransposeConnection, PLearn::RBMMixedConnection, PLearn::RBMModule, PLearn::RBMMultitaskClassificationModule, PLearn::RBMRateLayer, PLearn::RBMSparse1DMatrixConnection, PLearn::RBMWoodsLayer, PLearn::ScaleGradientModule, PLearn::ShuntingNNetLayerModule, PLearn::SoftmaxModule, PLearn::SoftmaxNLLCostModule, PLearn::SplitModule, PLearn::SquaredErrorCostModule, PLearn::Subsampling2DModule, PLearn::Supersampling2DModule, PLearn::TanhModule, PLearn::VBoundDBN2, PLearn::NnlmOutputLayer, and PLearn::NnlmWordRepresentationLayer.

Definition at line 292 of file OnlineLearningModule.h.

bool PLearn::OnlineLearningModule::estimate_simpler_diag_hessian

Compute a simpler estimation of the diagonal of the input Hessian matrix, using only the first (positive) part in: d²C/dx² ~= d²C/dy² (dy/dx)² [+ dC/dy d²y/dx²]

Definition at line 84 of file OnlineLearningModule.h.

Referenced by PLearn::TanhModule::bbpropUpdate(), PLearn::SquaredErrModule::bbpropUpdate(), PLearn::NLLErrModule::bbpropUpdate(), PLearn::StackedModulesModule::buildLayers(), and declareOptions().

Path of the directory associated with this module, in which it should save any file it wishes to create.

The directory will be created if it does not already exist. If expdir is the empty string (the default), then the module should not create *any* file.

Definition at line 92 of file OnlineLearningModule.h.

Referenced by declareOptions().

int PLearn::OnlineLearningModule::input_size

Input size.

Definition at line 74 of file OnlineLearningModule.h.

Referenced by PLearn::NnlmOutputLayer::addCandidateContribution(), PLearn::NnlmOutputLayer::applyAllClassVars(), PLearn::NnlmOutputLayer::applyMuAndSigmaEmpiricalUpdate(), PLearn::NnlmOutputLayer::applyMuCandidateGradient(), PLearn::NnlmOutputLayer::applyMuGradient(), PLearn::NnlmOutputLayer::applyMuTargetGradient(), PLearn::NnlmOutputLayer::applySigmaCandidateGradient(), PLearn::NnlmOutputLayer::applySigmaGradient(), PLearn::NnlmOutputLayer::applySigmaTargetGradient(), PLearn::BackConvolution2DModule::bbpropUpdate(), PLearn::Subsampling2DModule::bbpropUpdate(), PLearn::Supersampling2DModule::bbpropUpdate(), PLearn::CostModule::bbpropUpdate(), PLearn::TanhModule::bbpropUpdate(), PLearn::CombiningCostsModule::bbpropUpdate(), PLearn::NLLErrModule::bbpropUpdate(), PLearn::ProcessInputCostModule::bbpropUpdate(), PLearn::SquaredErrModule::bbpropUpdate(), PLearn::Convolution2DModule::bbpropUpdate(), PLearn::ModuleStackModule::bbpropUpdate(), PLearn::OnBagsModule::bpropAccUpdate(), PLearn::LayerCostModule::bpropAccUpdate(), PLearn::RBMLateralBinomialLayer::bpropNLL(), PLearn::RBMBinomialLayer::bpropNLL(), PLearn::RBMMultinomialLayer::bpropNLL(), PLearn::RBMWoodsLayer::bpropNLL(), PLearn::RBMMixedLayer::bpropNLL(), PLearn::RBMRateLayer::bpropNLL(), PLearn::RBMLocalMultinomialLayer::bpropNLL(), PLearn::RBMGaussianLayer::bpropNLL(), PLearn::BackConvolution2DModule::bpropUpdate(), PLearn::CombiningCostsModule::bpropUpdate(), PLearn::LinearFilterModule::bpropUpdate(), PLearn::OnBagsModule::bpropUpdate(), PLearn::Convolution2DModule::bpropUpdate(), PLearn::CostModule::bpropUpdate(), PLearn::LayerCostModule::bpropUpdate(), PLearn::ProcessInputCostModule::bpropUpdate(), PLearn::ShuntingNNetLayerModule::bpropUpdate(), PLearn::SquaredErrModule::bpropUpdate(), PLearn::Subsampling2DModule::bpropUpdate(), PLearn::ModuleStackModule::bpropUpdate(), PLearn::UndirectedSoftmaxModule::bpropUpdate(), PLearn::NnlmOutputLayer::bpropUpdate(), PLearn::GradNNetLayerModule::bpropUpdate(), PLearn::NnlmWordRepresentationLayer::bpropUpdate(), PLearn::Supersampling2DModule::bpropUpdate(), PLearn::RBMMultitaskClassificationModule::bpropUpdate(), PLearn::NLLErrModule::bpropUpdate(), PLearn::TanhModule::bpropUpdate(), PLearn::GradNNetLayerModule::build_(), PLearn::RBMMultitaskClassificationModule::build_(), PLearn::ProcessInputCostModule::build_(), PLearn::Convolution2DModule::build_(), PLearn::StackedModulesModule::build_(), PLearn::Supersampling2DModule::build_(), PLearn::NnlmWordRepresentationLayer::build_(), PLearn::RBMJointLLParameters::build_(), PLearn::RBMParameters::build_(), PLearn::CombiningCostsModule::build_(), PLearn::OnBagsModule::build_(), PLearn::NnlmOutputLayer::build_(), PLearn::ShuntingNNetLayerModule::build_(), PLearn::BackConvolution2DModule::build_(), PLearn::MaxSubsampling2DModule::build_(), PLearn::RBMJointGenericParameters::build_(), PLearn::NLLErrModule::build_(), PLearn::UndirectedSoftmaxModule::build_(), PLearn::RBMMatrixTransposeConnection::build_(), PLearn::TanhModule::build_(), PLearn::ClassErrorCostModule::build_(), PLearn::LinearFilterModule::build_(), PLearn::RBMConnection::build_(), PLearn::LayerCostModule::build_(), PLearn::ModuleStackModule::build_(), PLearn::SquaredErrModule::build_(), PLearn::SplitModule::build_(), PLearn::Subsampling2DModule::build_(), PLearn::TopDownAsymetricDeepNetwork::build_output_layer_and_cost(), PLearn::StackedFocusedAutoassociatorsNet::build_output_layer_and_cost(), PLearn::DiscriminativeDeepBeliefNet::build_output_layer_and_cost(), 
PLearn::StackedModulesModule::buildLayers(), PLearn::NnlmOutputLayer::compute_nl_p_rt(), PLearn::NnlmOutputLayer::computeApproxDiscriminantGradient(), PLearn::LayerCostModule::computeCorrelationStatistics(), PLearn::NnlmOutputLayer::computeDiscriminantGradient(), PLearn::LayerCostModule::computeHisto(), PLearn::LayerCostModule::computeKLdiv(), PLearn::NnlmOutputLayer::computeNonDiscriminantGradient(), PLearn::LayerCostModule::computePascalStatistics(), PLearn::LayerCostModule::computeSafeHisto(), PLearn::Subsampling2DModule::declareOptions(), PLearn::NetworkModule::declareOptions(), PLearn::RBMConnection::declareOptions(), PLearn::SplitModule::declareOptions(), declareOptions(), PLearn::Convolution2DModule::declareOptions(), PLearn::MaxSubsampling2DModule::declareOptions(), PLearn::ModuleStackModule::declareOptions(), PLearn::Supersampling2DModule::declareOptions(), PLearn::CombiningCostsModule::declareOptions(), PLearn::BackConvolution2DModule::declareOptions(), PLearn::LinearFilterModule::forget(), PLearn::ShuntingNNetLayerModule::forget(), PLearn::UndirectedSoftmaxModule::forget(), PLearn::GradNNetLayerModule::forget(), PLearn::ShuntingNNetLayerModule::fprop(), PLearn::ClassErrorCostModule::fprop(), PLearn::RBMMultinomialLayer::fprop(), PLearn::RBMWoodsLayer::fprop(), PLearn::Subsampling2DModule::fprop(), PLearn::UndirectedSoftmaxModule::fprop(), PLearn::Convolution2DModule::fprop(), PLearn::GradNNetLayerModule::fprop(), PLearn::RBMBinomialLayer::fprop(), PLearn::RBMMixedLayer::fprop(), PLearn::SplitModule::fprop(), PLearn::LayerCostModule::fprop(), PLearn::ModuleStackModule::fprop(), PLearn::RBMGaussianLayer::fprop(), PLearn::RBMMultitaskClassificationModule::fprop(), PLearn::ProcessInputCostModule::fprop(), PLearn::RBMRateLayer::fprop(), PLearn::StackedModulesModule::fprop(), PLearn::LinearFilterModule::fprop(), PLearn::RBMLayer::fprop(), PLearn::RBMLocalMultinomialLayer::fprop(), PLearn::TanhModule::fprop(), PLearn::CostModule::fprop(), PLearn::NnlmWordRepresentationLayer::fprop(), PLearn::OnBagsModule::fprop(), PLearn::SquaredErrModule::fprop(), PLearn::Supersampling2DModule::fprop(), PLearn::CombiningCostsModule::fprop(), PLearn::RBMTruncExpLayer::fprop(), PLearn::BackConvolution2DModule::fprop(), PLearn::NLLErrModule::fprop(), PLearn::RBMLateralBinomialLayer::fprop(), PLearn::RBMMultinomialLayer::fpropNLL(), PLearn::RBMLateralBinomialLayer::fpropNLL(), PLearn::RBMLayer::fpropNLL(), PLearn::RBMBinomialLayer::fpropNLL(), PLearn::RBMGaussianLayer::fpropNLL(), PLearn::RBMWoodsLayer::fpropNLL(), PLearn::RBMMixedLayer::fpropNLL(), PLearn::RBMRateLayer::fpropNLL(), PLearn::RBMLocalMultinomialLayer::fpropNLL(), PLearn::CostModule::getPortSizes(), PLearn::SplitModule::getPortSizes(), getPortSizes(), PLearn::NLLErrModule::getTarget(), PLearn::NnlmOutputLayer::resetAllClassVars(), PLearn::NnlmOutputLayer::resetParameters(), PLearn::UndirectedSoftmaxModule::resetWeights(), and PLearn::NnlmOutputLayer::updateClassVars().

int PLearn::OnlineLearningModule::output_size

Output size.

Definition at line 77 of file OnlineLearningModule.h.

Referenced by PLearn::BackConvolution2DModule::bbpropUpdate(), PLearn::Subsampling2DModule::bbpropUpdate(), PLearn::Supersampling2DModule::bbpropUpdate(), PLearn::TanhModule::bbpropUpdate(), PLearn::GradNNetLayerModule::bbpropUpdate(), PLearn::LinearFilterModule::bbpropUpdate(), PLearn::NLLErrModule::bbpropUpdate(), PLearn::SquaredErrModule::bbpropUpdate(), PLearn::UndirectedSoftmaxModule::bbpropUpdate(), PLearn::Convolution2DModule::bbpropUpdate(), PLearn::ModuleStackModule::bbpropUpdate(), PLearn::OnBagsModule::bpropAccUpdate(), PLearn::BackConvolution2DModule::bpropUpdate(), PLearn::LinearFilterModule::bpropUpdate(), PLearn::OnBagsModule::bpropUpdate(), PLearn::Convolution2DModule::bpropUpdate(), PLearn::ShuntingNNetLayerModule::bpropUpdate(), PLearn::SquaredErrModule::bpropUpdate(), PLearn::Subsampling2DModule::bpropUpdate(), PLearn::ModuleStackModule::bpropUpdate(), PLearn::UndirectedSoftmaxModule::bpropUpdate(), PLearn::NnlmOutputLayer::bpropUpdate(), PLearn::GradNNetLayerModule::bpropUpdate(), PLearn::NnlmWordRepresentationLayer::bpropUpdate(), PLearn::Supersampling2DModule::bpropUpdate(), PLearn::RBMMultitaskClassificationModule::bpropUpdate(), PLearn::NLLErrModule::bpropUpdate(), PLearn::TanhModule::bpropUpdate(), PLearn::GradNNetLayerModule::build_(), PLearn::RBMMultitaskClassificationModule::build_(), PLearn::ProcessInputCostModule::build_(), PLearn::RBMLQParameters::build_(), PLearn::Convolution2DModule::build_(), PLearn::StackedModulesModule::build_(), PLearn::RBMConv2DConnection::build_(), PLearn::NnlmWordRepresentationLayer::build_(), PLearn::RBMJointLLParameters::build_(), PLearn::RBMParameters::build_(), PLearn::CombiningCostsModule::build_(), PLearn::OnBagsModule::build_(), PLearn::NnlmOutputLayer::build_(), PLearn::ShuntingNNetLayerModule::build_(), PLearn::MaxSubsampling2DModule::build_(), PLearn::RBMJointGenericParameters::build_(), PLearn::NLLErrModule::build_(), PLearn::UndirectedSoftmaxModule::build_(), PLearn::RBMMatrixTransposeConnection::build_(), PLearn::TanhModule::build_(), PLearn::ClassErrorCostModule::build_(), PLearn::LinearFilterModule::build_(), PLearn::RBMConnection::build_(), PLearn::RBMGenericParameters::build_(), PLearn::ModuleStackModule::build_(), PLearn::RBMLLParameters::build_(), PLearn::SquaredErrModule::build_(), PLearn::RBMConv2DLLParameters::build_(), PLearn::RBMQLParameters::build_(), PLearn::SplitModule::build_(), PLearn::Subsampling2DModule::build_(), PLearn::TopDownAsymetricDeepNetwork::build_output_layer_and_cost(), PLearn::StackedFocusedAutoassociatorsNet::build_output_layer_and_cost(), PLearn::DiscriminativeDeepBeliefNet::build_output_layer_and_cost(), PLearn::ClassErrorCostModule::ClassErrorCostModule(), PLearn::TanhModule::declareOptions(), PLearn::Subsampling2DModule::declareOptions(), PLearn::NetworkModule::declareOptions(), PLearn::RBMConnection::declareOptions(), PLearn::SplitModule::declareOptions(), declareOptions(), PLearn::Convolution2DModule::declareOptions(), PLearn::MaxSubsampling2DModule::declareOptions(), PLearn::ModuleStackModule::declareOptions(), PLearn::LogaddOnBagsModule::declareOptions(), PLearn::Supersampling2DModule::declareOptions(), PLearn::CostModule::declareOptions(), PLearn::BackConvolution2DModule::declareOptions(), PLearn::SoftmaxModule::declareOptions(), PLearn::LinearFilterModule::forget(), PLearn::ShuntingNNetLayerModule::forget(), PLearn::GradNNetLayerModule::forget(), PLearn::ShuntingNNetLayerModule::fprop(), PLearn::ClassErrorCostModule::fprop(), PLearn::RBMMultinomialLayer::fprop(), 
PLearn::RBMWoodsLayer::fprop(), PLearn::Subsampling2DModule::fprop(), PLearn::Convolution2DModule::fprop(), PLearn::GradNNetLayerModule::fprop(), PLearn::RBMBinomialLayer::fprop(), PLearn::RBMMixedLayer::fprop(), PLearn::RBMJointGenericParameters::fprop(), PLearn::RBMJointLLParameters::fprop(), PLearn::LayerCostModule::fprop(), PLearn::RBMGaussianLayer::fprop(), PLearn::RBMMultitaskClassificationModule::fprop(), PLearn::RBMConnection::fprop(), PLearn::RBMRateLayer::fprop(), PLearn::StackedModulesModule::fprop(), PLearn::LinearFilterModule::fprop(), PLearn::RBMLayer::fprop(), PLearn::RBMLocalMultinomialLayer::fprop(), PLearn::TanhModule::fprop(), PLearn::CostModule::fprop(), PLearn::NnlmWordRepresentationLayer::fprop(), PLearn::OnBagsModule::fprop(), PLearn::SquaredErrModule::fprop(), PLearn::Supersampling2DModule::fprop(), PLearn::CombiningCostsModule::fprop(), PLearn::RBMTruncExpLayer::fprop(), PLearn::BackConvolution2DModule::fprop(), PLearn::NLLErrModule::fprop(), PLearn::RBMLateralBinomialLayer::fprop(), PLearn::RBMParameters::fprop(), PLearn::CostModule::getPortSizes(), getPortSizes(), PLearn::LayerCostModule::LayerCostModule(), PLearn::NLLErrModule::NLLErrModule(), PLearn::UndirectedSoftmaxModule::resetWeights(), and PLearn::SquaredErrModule::SquaredErrModule().

PP< PRandom > PLearn::OnlineLearningModule::random_gen

Optional random generator, possibly shared among several modules.

Reimplemented in PLearn::RBMLayer.

Definition at line 95 of file OnlineLearningModule.h.

Referenced by PLearn::RBMMultitaskClassificationModule::build_(), PLearn::ProcessInputCostModule::build_(), PLearn::NetworkModule::build_(), PLearn::StackedModulesModule::build_(), PLearn::CombiningCostsModule::build_(), PLearn::RBMModule::build_(), PLearn::UndirectedSoftmaxModule::build_(), PLearn::RBMMatrixTransposeConnection::build_(), PLearn::ForwardModule::build_(), PLearn::KLp0p1RBMModule::build_(), PLearn::ModuleStackModule::build_(), PLearn::TreeDBNModule::build_(), PLearn::TopDownAsymetricDeepNetwork::build_output_layer_and_cost(), PLearn::StackedFocusedAutoassociatorsNet::build_output_layer_and_cost(), PLearn::DiscriminativeDeepBeliefNet::build_output_layer_and_cost(), PLearn::StackedModulesModule::buildLayers(), declareOptions(), PLearn::BackConvolution2DModule::forget(), PLearn::RBMSparse1DMatrixConnection::forget(), PLearn::Convolution2DModule::forget(), PLearn::LinearFilterModule::forget(), PLearn::ProcessInputCostModule::forget(), PLearn::RBMMatrixConnection::forget(), PLearn::Supersampling2DModule::forget(), PLearn::ShuntingNNetLayerModule::forget(), PLearn::UndirectedSoftmaxModule::forget(), PLearn::GradNNetLayerModule::forget(), PLearn::NetworkModule::forget(), PLearn::RBMMultitaskClassificationModule::forget(), PLearn::ModuleStackModule::forget(), PLearn::RBMMatrixTransposeConnection::forget(), PLearn::RBMQLParameters::forget(), PLearn::RBMLLParameters::forget(), PLearn::NnlmWordRepresentationLayer::forget(), PLearn::Subsampling2DModule::forget(), PLearn::CombiningCostsModule::forget(), PLearn::RBMLQParameters::forget(), PLearn::RBMConv2DConnection::forget(), PLearn::StackedModulesModule::forget(), PLearn::RBMConv2DLLParameters::forget(), PLearn::RBMGenericParameters::forget(), PLearn::RBMDiagonalMatrixConnection::forget(), PLearn::BinarizeModule::fprop(), PLearn::RBMModule::fprop(), and makeDeepCopyFromShallowCopy().
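
When provided, subclasses typically use this generator to (re)initialize their parameters, e.g. in forget(); a sketch with an illustrative weight matrix (uniform_sample() is assumed here to be PRandom's basic uniform draw in [0, 1)):

void MyLayerModule::forget()
{
    weights.resize(output_size, input_size);
    if (random_gen)
        for (int i = 0; i < weights.length(); i++)
            for (int j = 0; j < weights.width(); j++)
                weights(i, j) = 0.02 * random_gen->uniform_sample() - 0.01;  // small random init
    else
        weights.fill(0);  // no generator provided: fall back to zeros
}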


The documentation for this class was generated from the following files:

 OnlineLearningModule.h
 OnlineLearningModule.cc