PLearn 0.1
Wraps a stack of OnlineLearningModule, which are layers.
#include <StackedModulesModule.h>
Public Member Functions

StackedModulesModule ()
    Default constructor.
virtual void fprop (const Vec &input, Vec &output) const
    Given the input, compute the output (possibly resizing it appropriately).
virtual void bpropUpdate (const Vec &input, const Vec &output, Vec &input_gradient, const Vec &output_gradient)
    Adapt based on the output gradient: this method should only be called just after a corresponding fprop; its first two arguments must be the same as those passed to fprop (and output must not have been modified since then).
virtual void bbpropUpdate (const Vec &input, const Vec &output, Vec &input_gradient, const Vec &output_gradient, Vec &input_diag_hessian, const Vec &output_diag_hessian)
    Similar to bpropUpdate, but also adapts based on an estimate of the diagonal of the Hessian matrix, and propagates this estimate back.
virtual void forget ()
    Reset the parameters to the state they would be in BEFORE starting training.
virtual string classname () const
virtual OptionList & getOptionList () const
virtual OptionMap & getOptionMap () const
virtual RemoteMethodMap & getRemoteMethodMap () const
virtual StackedModulesModule * deepCopy (CopiesMap &copies) const
virtual void build ()
    Post-constructor.
virtual void makeDeepCopyFromShallowCopy (CopiesMap &copies)
    Transforms a shallow copy into a deep copy.
Static Public Member Functions

static string _classname_ ()
static OptionList & _getOptionList_ ()
static RemoteMethodMap & _getRemoteMethodMap_ ()
static Object * _new_instance_for_typemap_ ()
static bool _isa_ (const Object *o)
static void _static_initialize_ ()
static const PPath & declaringFile ()
Public Attributes

TVec< PP< OnlineLearningModule > > modules
    Underlying layers of the Module.
bool last_layer_is_cost
    Indicates if the last layer is a cost layer (taking input and target as input, and outputting the cost we will minimize), allowing this module to behave the same way.
int target_size
    If last_layer_is_cost, the size of the target.
TVec< Vec > values
    Stores the input and output values of the functions.
Vec cost_layer_input
    Stores the input of the last module, and the target if there is one.
TVec< Vec > gradients
    Stores the gradients.
TVec< Vec > diag_hessians
    Stores the diagonals of the Hessians.
Static Public Attributes

static StaticInitializer _static_initializer_

Static Protected Member Functions

static void declareOptions (OptionList &ol)
    Declares the class options.

Protected Attributes

int nmodules
    Number of module layers.

Private Types

typedef OnlineLearningModule inherited

Private Member Functions

void build_ ()
    This does the actual building.
void buildOptions ()
void buildLayers ()
Wraps a stack of OnlineLearningModule, which are layers.
The OnlineLearningModules are arranged as stacked (superposed) layers: the outputs of module i are the inputs of module (i+1), and the last layer is the output layer.
Definition at line 55 of file StackedModulesModule.h.
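As a usage illustration, here is a minimal sketch (an assumption, not taken from the PLearn sources) of assembling such a stack. The two OnlineLearningModule arguments stand for hypothetical concrete layer and cost modules; only the options and methods documented on this page are used.

#include <StackedModulesModule.h>

using namespace PLearn;

PP<StackedModulesModule> make_stack( PP<OnlineLearningModule> hidden_layer,
                                     PP<OnlineLearningModule> cost_module,
                                     int target_size )
{
    PP<StackedModulesModule> stack = new StackedModulesModule();

    // Outputs of modules[i] are fed as inputs to modules[i+1];
    // the last module here plays the role of a cost layer.
    stack->modules.resize(2);
    stack->modules[0] = hidden_layer;
    stack->modules[1] = cost_module;

    stack->last_layer_is_cost = true;   // the target is appended to the input
    stack->target_size = target_size;   // must be > 0 when last_layer_is_cost

    // build() sets input_size = modules[0]->input_size + target_size
    // and output_size = modules[1]->output_size, then builds the sub-modules.
    stack->build();
    return stack;
}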
typedef OnlineLearningModule PLearn::StackedModulesModule::inherited [private]
Reimplemented from PLearn::OnlineLearningModule.
Definition at line 57 of file StackedModulesModule.h.
PLearn::StackedModulesModule::StackedModulesModule ( )
Default constructor.
Definition at line 53 of file StackedModulesModule.cc.
: last_layer_is_cost( false ), target_size( 0 ), nmodules( 0 ) { }
string PLearn::StackedModulesModule::_classname_ ( ) [static]
Reimplemented from PLearn::OnlineLearningModule.
Definition at line 51 of file StackedModulesModule.cc.
OptionList & PLearn::StackedModulesModule::_getOptionList_ ( ) [static]
Reimplemented from PLearn::OnlineLearningModule.
Definition at line 51 of file StackedModulesModule.cc.
RemoteMethodMap & PLearn::StackedModulesModule::_getRemoteMethodMap_ ( ) [static]
Reimplemented from PLearn::OnlineLearningModule.
Definition at line 51 of file StackedModulesModule.cc.
bool PLearn::StackedModulesModule::_isa_ ( const Object * o ) [static]
Reimplemented from PLearn::OnlineLearningModule.
Definition at line 51 of file StackedModulesModule.cc.
Object * PLearn::StackedModulesModule::_new_instance_for_typemap_ ( ) [static]
Reimplemented from PLearn::Object.
Definition at line 51 of file StackedModulesModule.cc.
void PLearn::StackedModulesModule::_static_initialize_ ( ) [static]
Reimplemented from PLearn::OnlineLearningModule.
Definition at line 51 of file StackedModulesModule.cc.
void PLearn::StackedModulesModule::bbpropUpdate ( const Vec & input,
                                                  const Vec & output,
                                                  Vec & input_gradient,
                                                  const Vec & output_gradient,
                                                  Vec & input_diag_hessian,
                                                  const Vec & output_diag_hessian ) [virtual]
Similar to bpropUpdate, but also adapts based on an estimate of the diagonal of the Hessian matrix, and propagates this estimate back.
If defined, this method can be used instead of bpropUpdate(); this version also makes the input gradient and the input diagonal Hessian available.
N.B. In the super-class, one default implementation just calls this full bbpropUpdate and ignores the input gradient and input Hessian it computes, while the other default implementation raises a PLERROR.
Definition at line 260 of file StackedModulesModule.cc.
References PLearn::TVec< T >::copy(), diag_hessians, gradients, i, last_layer_is_cost, modules, nmodules, and values.
{
    // If last_layer_is_cost, the gradient wrt it is 1 and hessian is 0
    if( last_layer_is_cost )
    {
        gradients[nmodules][0] = 1;
        diag_hessians[nmodules][0] = 1;
    }
    else
    {
        gradients[nmodules] << output_gradient;
        diag_hessians[nmodules] << output_diag_hessian;
    }

    // values should have the values given by fprop(), so
    // values[nmodules] should already be equal to output
    for( int i=nmodules-1 ; i>=0 ; i-- )
        modules[i]->bbpropUpdate( values[i], values[i+1],
                                  gradients[i], gradients[i+1],
                                  diag_hessians[i], diag_hessians[i+1] );

    input_gradient = gradients[0].copy();
    input_diag_hessian = diag_hessians[0].copy();
}
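A hedged usage sketch for this second-order variant (an assumption, not from the PLearn sources): with last_layer_is_cost = true, the gradient and diagonal-Hessian seeds with respect to the cost are set internally, so the output_gradient and output_diag_hessian arguments are ignored. The function name is hypothetical.

// (assumes the same headers and 'using namespace PLearn' as the earlier sketch)
// fprop() followed by bbpropUpdate(), returning the input gradient and the
// estimated diagonal of the Hessian with respect to the input.
void second_order_step( PP<StackedModulesModule> stack, const Vec& sample )
{
    Vec cost;                       // resized by fprop() to output_size
    Vec input_gradient;             // receives a copy of gradients[0]
    Vec input_diag_hessian;         // receives a copy of diag_hessians[0]
    Vec unused_grad, unused_hess;   // ignored when last_layer_is_cost is true

    stack->fprop( sample, cost );
    stack->bbpropUpdate( sample, cost,
                         input_gradient, unused_grad,
                         input_diag_hessian, unused_hess );
}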
void PLearn::StackedModulesModule::bpropUpdate ( const Vec & input,
                                                 const Vec & output,
                                                 Vec & input_gradient,
                                                 const Vec & output_gradient ) [virtual]
Adapt based on the output gradient: this method should only be called just after a corresponding fprop; its first two arguments must be the same as those passed to fprop (and output must not have been modified since then).
This version also makes the input gradient available. Since sub-classes are supposed to learn ONLINE, the object is 'ready-to-be-used' just after any bpropUpdate.
N.B. In the super-class, one default implementation just calls this full bpropUpdate and ignores the input gradient it computes, while the default implementation of this version raises a PLERROR.
Definition at line 200 of file StackedModulesModule.cc.
References PLearn::TVec< T >::copy(), cost_layer_input, gradients, i, last_layer_is_cost, modules, nmodules, and values.
{
    // If last_layer_is_cost, the gradient wrt it is 1
    if( last_layer_is_cost )
        gradients[nmodules][0] = 1;
    else
        gradients[nmodules] << output_gradient;

    // values should have the values given by fprop(), so
    // values[nmodules] should already be equal to output
    modules[nmodules-1]->bpropUpdate( cost_layer_input, values[nmodules],
                                      gradients[nmodules-1], gradients[nmodules] );

    for( int i=nmodules-2 ; i>=0 ; i-- )
        modules[i]->bpropUpdate( values[i], values[i+1],
                                 gradients[i], gradients[i+1] );

    input_gradient = gradients[0].copy();
}
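A hedged sketch of one online training step built on the above (an assumption, not from the PLearn sources). With last_layer_is_cost = true, the output_gradient argument is ignored, since the gradient with respect to the cost is forced to 1. The function name is hypothetical.

// (assumes the same headers and 'using namespace PLearn' as the earlier sketch)
// One online update: fprop() followed immediately by bpropUpdate() with the
// same input and output, as required by the contract described above.
void train_one_step( PP<StackedModulesModule> stack, const Vec& sample )
{
    Vec cost;                       // resized by fprop() to output_size
    Vec input_gradient;             // receives a copy of gradients[0]
    Vec unused_output_gradient;     // ignored when last_layer_is_cost is true

    stack->fprop( sample, cost );
    stack->bpropUpdate( sample, cost, input_gradient, unused_output_gradient );
}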
void PLearn::StackedModulesModule::build ( ) [virtual]
Post-constructor.
The normal implementation should call simply inherited::build(), then this class's build_(). This method should be callable again at later times, after modifying some option fields to change the "architecture" of the object.
Reimplemented from PLearn::OnlineLearningModule.
Definition at line 157 of file StackedModulesModule.cc.
References PLearn::OnlineLearningModule::build(), and build_().
{
    inherited::build();
    build_();
}
void PLearn::StackedModulesModule::build_ ( ) [private]
This does the actual building.
Reimplemented from PLearn::OnlineLearningModule.
Definition at line 92 of file StackedModulesModule.cc.
References buildLayers(), PLearn::OnlineLearningModule::input_size, last_layer_is_cost, PLearn::TVec< T >::length(), modules, nmodules, PLearn::OnlineLearningModule::output_size, PLASSERT, PLERROR, PLearn::OnlineLearningModule::random_gen, and target_size.
Referenced by build().
{
    // initialize random generator from seed
    if( !random_gen )
        random_gen = new PRandom();
    else
        random_gen->manual_seed( random_gen->seed_ );

    // get some options
    nmodules = modules.length();
    if( nmodules == 0 )
        return;

    if( last_layer_is_cost && target_size <= 0 )
        PLERROR("StackedModulesModule::build_() - Please provide a target_size"
                " > 0\n"
                "(is '%d').\n", target_size );
    if( !last_layer_is_cost )
        target_size = 0;

    PLASSERT( modules[0]->input_size >= 0 );
    input_size = modules[0]->input_size + target_size;

    // int last_module_output_size = modules[nmodules-1]->output_size;
    // if( last_layer_is_cost )
    //     last_module_output_size = 1;

    output_size = modules[nmodules-1]->output_size;

    // build the modules
    buildLayers();
}
void PLearn::StackedModulesModule::buildLayers ( ) [private]
Definition at line 126 of file StackedModulesModule.cc.
References cost_layer_input, diag_hessians, PLearn::OnlineLearningModule::estimate_simpler_diag_hessian, gradients, i, PLearn::OnlineLearningModule::input_size, last_layer_is_cost, modules, nmodules, PLearn::OnlineLearningModule::random_gen, PLearn::TVec< T >::resize(), PLearn::TVec< T >::size(), target_size, and values.
Referenced by build_().
{
    // first values will be "input" values
    int size = input_size - target_size;
    values.resize( nmodules+1 );
    values[0].resize( size );
    gradients.resize( nmodules+1 );
    gradients[0].resize( size );
    // TODO: use it only if we actually use bbprop?
    diag_hessians.resize( nmodules+1 );
    diag_hessians[0].resize( size );

    for( int i=0 ; i<nmodules ; i++ )
    {
        modules[i]->estimate_simpler_diag_hessian =
            estimate_simpler_diag_hessian;
        modules[i]->random_gen = random_gen;
        modules[i]->build();

        size = modules[i]->output_size;
        values[i+1].resize( size );
        gradients[i+1].resize( size );
        diag_hessians[i+1].resize( size );
    }

    // stores the input of the last module, and the target if there is one
    cost_layer_input = values[nmodules-1];
    if( last_layer_is_cost )
        cost_layer_input.resize( cost_layer_input.size() + target_size );
}
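As a concrete (hypothetical) illustration of the size bookkeeping above, suppose two modules are stacked, the first mapping 10 inputs to 5 outputs and the second being a cost module with output_size 1 (and input_size 6 = 5 + the target), with last_layer_is_cost = true and target_size = 1:

// Sizes implied by buildLayers() for this hypothetical configuration:
//   nmodules         = 2
//   input_size       = 11   (10 features + 1 target, set in build_())
//   values[0]        : 10   (input_size - target_size)
//   values[1]        : 5    (modules[0]->output_size)
//   values[2]        : 1    (modules[1]->output_size, the cost)
//   cost_layer_input : 6    (values[1].size() + target_size)
//   gradients[i] and diag_hessians[i] mirror the sizes of values[i].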
void PLearn::StackedModulesModule::buildOptions ( ) [private]
string PLearn::StackedModulesModule::classname ( ) const [virtual]
Reimplemented from PLearn::Object.
Definition at line 51 of file StackedModulesModule.cc.
void PLearn::StackedModulesModule::declareOptions ( OptionList & ol ) [static, protected]
Declares the class options.
Reimplemented from PLearn::OnlineLearningModule.
Definition at line 60 of file StackedModulesModule.cc.
References PLearn::OptionBase::buildoption, PLearn::declareOption(), PLearn::OnlineLearningModule::declareOptions(), last_layer_is_cost, PLearn::OptionBase::learntoption, modules, nmodules, and target_size.
{
    /* declareOption(ol, "", &StackedModulesModule::,
                     OptionBase::buildoption,
                     ""); */

    declareOption(ol, "modules", &StackedModulesModule::modules,
                  OptionBase::buildoption,
                  "Underlying layers of the Module");

    declareOption(ol, "last_layer_is_cost",
                  &StackedModulesModule::last_layer_is_cost,
                  OptionBase::buildoption,
                  "Indicates if the last layer is a cost layer (taking input"
                  " and target\n"
                  "as input, and outputing the cost we will minimize),"
                  " allowing this\n"
                  "module to behave the same way.\n");

    declareOption(ol, "target_size", &StackedModulesModule::target_size,
                  OptionBase::buildoption,
                  "If last_layer_is_cost, the size of the target");

    declareOption(ol, "nmodules", &StackedModulesModule::nmodules,
                  OptionBase::learntoption,
                  "Number of module layers");

    // Now call the parent class' declareOptions
    inherited::declareOptions(ol);
}
const PPath & PLearn::StackedModulesModule::declaringFile ( ) [inline, static]
Reimplemented from PLearn::OnlineLearningModule.
Definition at line 148 of file StackedModulesModule.h.
StackedModulesModule * PLearn::StackedModulesModule::deepCopy ( CopiesMap & copies ) const [virtual]
Reimplemented from PLearn::OnlineLearningModule.
Definition at line 51 of file StackedModulesModule.cc.
void PLearn::StackedModulesModule::forget ( ) [virtual]
Reset the parameters to the state they would be in BEFORE starting training.
Note that this method is necessarily called from build().
Implements PLearn::OnlineLearningModule.
Definition at line 225 of file StackedModulesModule.cc.
References PLearn::TVec< T >::clear(), diag_hessians, gradients, i, modules, nmodules, PLearn::OnlineLearningModule::random_gen, and values.
{
    random_gen->manual_seed( random_gen->seed_ );

    // reset inputs
    values[0].clear();
    gradients[0].clear();
    diag_hessians[0].clear();

    // reset modules and outputs
    for( int i=0 ; i<nmodules ; i++ )
    {
        modules[i]->forget();
        values[i+1].clear();
        gradients[i+1].clear();
        diag_hessians[i+1].clear();
    }
}
void PLearn::StackedModulesModule::fprop ( const Vec & input, Vec & output ) const [virtual]
Given the input, compute the output (possibly resizing it appropriately).
Reimplemented from PLearn::OnlineLearningModule.
Definition at line 176 of file StackedModulesModule.cc.
References cost_layer_input, i, PLearn::OnlineLearningModule::input_size, last_layer_is_cost, modules, nmodules, PLearn::OnlineLearningModule::output_size, PLASSERT, PLearn::TVec< T >::resize(), PLearn::TVec< T >::size(), PLearn::TVec< T >::subVec(), target_size, and values.
{
    PLASSERT( input.size() == input_size );
    PLASSERT( modules[0]->input_size + target_size == input_size );
    int last_input_size = values[nmodules-1].size();

    values[0] << input.subVec( 0, input_size - target_size );

    for( int i=0 ; i<nmodules-1 ; i++ )
        modules[i]->fprop( values[i], values[i+1] );

    if( last_layer_is_cost )
    {
        cost_layer_input.subVec( last_input_size, target_size )
            << input.subVec( input_size - target_size, target_size );
    }

    modules[nmodules-1]->fprop( cost_layer_input, values[nmodules] );
    output.resize( output_size );
    output << values[ nmodules ];
}
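When last_layer_is_cost is true, the Vec given to fprop() must therefore hold the features followed by the target. A small hedged sketch (the helper name is an assumption, not part of the class):

// (assumes the same headers and 'using namespace PLearn' as the earlier sketch)
// Hypothetical helper: build the [ features | target ] layout expected by
// fprop() above, run the stack, and return the resulting output (the cost).
Vec fprop_with_target( PP<StackedModulesModule> stack,
                       const Vec& features, const Vec& target )
{
    Vec input( features.size() + target.size() );   // == stack->input_size
    input.subVec( 0, features.size() ) << features;
    input.subVec( features.size(), target.size() ) << target;

    Vec output;                     // resized by fprop() to output_size
    stack->fprop( input, output );
    return output;
}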
OptionList & PLearn::StackedModulesModule::getOptionList ( ) const [virtual]
Reimplemented from PLearn::Object.
Definition at line 51 of file StackedModulesModule.cc.
OptionMap & PLearn::StackedModulesModule::getOptionMap ( ) const [virtual]
Reimplemented from PLearn::Object.
Definition at line 51 of file StackedModulesModule.cc.
RemoteMethodMap & PLearn::StackedModulesModule::getRemoteMethodMap ( ) const [virtual]
Reimplemented from PLearn::Object.
Definition at line 51 of file StackedModulesModule.cc.
void PLearn::StackedModulesModule::makeDeepCopyFromShallowCopy ( CopiesMap & copies ) [virtual]
Transforms a shallow copy into a deep copy.
Reimplemented from PLearn::OnlineLearningModule.
Definition at line 164 of file StackedModulesModule.cc.
References cost_layer_input, PLearn::deepCopyField(), diag_hessians, gradients, PLearn::OnlineLearningModule::makeDeepCopyFromShallowCopy(), modules, and values.
{
    inherited::makeDeepCopyFromShallowCopy(copies);

    deepCopyField(modules, copies);
    deepCopyField(values, copies);
    deepCopyField(cost_layer_input, copies);
    deepCopyField(gradients, copies);
    deepCopyField(diag_hessians, copies);
}
StaticInitializer PLearn::StackedModulesModule::_static_initializer_ [static]
Reimplemented from PLearn::OnlineLearningModule.
Definition at line 148 of file StackedModulesModule.h.
Vec PLearn::StackedModulesModule::cost_layer_input
Stores the input of the last module, and the target if there is one.
Definition at line 169 of file StackedModulesModule.h.
Referenced by bpropUpdate(), buildLayers(), fprop(), and makeDeepCopyFromShallowCopy().
TVec< Vec > PLearn::StackedModulesModule::diag_hessians
Stores the diagonals of the Hessians.
Definition at line 175 of file StackedModulesModule.h.
Referenced by bbpropUpdate(), buildLayers(), forget(), and makeDeepCopyFromShallowCopy().
TVec< Vec > PLearn::StackedModulesModule::gradients
Stores the gradients.
Definition at line 172 of file StackedModulesModule.h.
Referenced by bbpropUpdate(), bpropUpdate(), buildLayers(), forget(), and makeDeepCopyFromShallowCopy().
bool PLearn::StackedModulesModule::last_layer_is_cost
Indicates if the last layer is a cost layer (taking input and target as input, and outputting the cost we will minimize), allowing this module to behave the same way.
Definition at line 71 of file StackedModulesModule.h.
Referenced by bbpropUpdate(), bpropUpdate(), build_(), buildLayers(), declareOptions(), and fprop().
TVec< PP< OnlineLearningModule > > PLearn::StackedModulesModule::modules
Underlying layers of the Module.
Definition at line 66 of file StackedModulesModule.h.
Referenced by bbpropUpdate(), bpropUpdate(), build_(), buildLayers(), declareOptions(), forget(), fprop(), and makeDeepCopyFromShallowCopy().
int PLearn::StackedModulesModule::nmodules [protected]
Number of module layers.
Definition at line 160 of file StackedModulesModule.h.
Referenced by bbpropUpdate(), bpropUpdate(), build_(), buildLayers(), declareOptions(), forget(), and fprop().
int PLearn::StackedModulesModule::target_size
If last_layer_is_cost, the size of the target.
Definition at line 74 of file StackedModulesModule.h.
Referenced by build_(), buildLayers(), declareOptions(), and fprop().
TVec< Vec > PLearn::StackedModulesModule::values
Stores the input and output values of the functions.
Definition at line 166 of file StackedModulesModule.h.
Referenced by bbpropUpdate(), bpropUpdate(), buildLayers(), forget(), fprop(), and makeDeepCopyFromShallowCopy().