PLearn 0.1
#include <AutoScaledGradientOptimizer.h>
Public Member Functions

AutoScaledGradientOptimizer ()
virtual string classname () const
virtual OptionList & getOptionList () const
virtual OptionMap & getOptionMap () const
virtual RemoteMethodMap & getRemoteMethodMap () const
virtual AutoScaledGradientOptimizer * deepCopy (CopiesMap &copies) const
virtual void setToOptimize (const VarArray &the_params, Var the_cost, VarArray the_other_costs=VarArray(0), TVec< VarArray > the_other_params=TVec< VarArray >(0), real the_other_weight=1)
virtual void makeDeepCopyFromShallowCopy (CopiesMap &copies)
    Does the necessary operations to transform a shallow copy (this) into a deep copy by deep-copying all the members that need to be.
virtual void build ()
    Post-constructor.
virtual bool optimizeN (VecStatsCollector &stats_coll)
    Main optimization method, to be defined in subclasses.
Static Public Member Functions

static string _classname_ ()
static OptionList & _getOptionList_ ()
static RemoteMethodMap & _getRemoteMethodMap_ ()
static Object * _new_instance_for_typemap_ ()
static bool _isa_ (const Object *o)
static void _static_initialize_ ()
static const PPath & declaringFile ()
Public Attributes

real learning_rate
    gradient descent specific parameters (directly modifiable by the user)
real start_learning_rate
real decrease_constant
Mat lr_schedule
int verbosity
int evaluate_scaling_every
int evaluate_scaling_during
real epsilon
Static Public Attributes

static StaticInitializer _static_initializer_

Static Protected Member Functions

static void declareOptions (OptionList &ol)
    Declare options (data fields) for the class.

Protected Attributes

Vec scaling
Vec meanabsgrad
int nsteps_remaining_for_evaluation
Vec param_values
Vec param_gradients
Private Types

typedef Optimizer inherited

Private Member Functions

void build_ ()
    Object-specific post-constructor.
Detailed Description

Definition at line 55 of file AutoScaledGradientOptimizer.h.
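Usage follows the standard PLearn Optimizer protocol: set build options, call build(), bind parameters and cost with setToOptimize(), then advance the optimization with optimizeN(). The sketch below assumes that protocol; the train() wrapper and the origin of params/cost are hypothetical, and nstages is the option inherited from Optimizer that fixes how many stages each optimizeN() call runs.

#include <AutoScaledGradientOptimizer.h>
using namespace PLearn;

// Hypothetical driver: params (VarArray) and cost (Var) are assumed to come
// from an existing PLearn computation graph, e.g. built by a learner.
void train(VarArray& params, Var cost, int n_epochs, int steps_per_epoch)
{
    AutoScaledGradientOptimizer opt;
    opt.start_learning_rate     = 1e-2; // set build options first...
    opt.evaluate_scaling_every  = 1000; // re-estimate scaling every 1000 stages
    opt.evaluate_scaling_during = 1000; // averaging |gradient| over 1000 stages
    opt.build();                        // ...then build()

    opt.setToOptimize(params, cost);    // binds flat value/gradient vectors

    VecStatsCollector stats;
    for (int epoch = 0; epoch < n_epochs; ++epoch) {
        opt.nstages = steps_per_epoch;  // optimizeN() runs nstages more stages
        opt.optimizeN(stats);           // this subclass always returns false
    }
}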
Member Typedef Documentation

typedef Optimizer PLearn::AutoScaledGradientOptimizer::inherited [private]
Reimplemented from PLearn::Optimizer.
Definition at line 57 of file AutoScaledGradientOptimizer.h.
Constructor & Destructor Documentation

PLearn::AutoScaledGradientOptimizer::AutoScaledGradientOptimizer ( )
Definition at line 63 of file AutoScaledGradientOptimizer.cc.
    : learning_rate(0.),
      start_learning_rate(1e-2),
      decrease_constant(0),
      verbosity(0),
      evaluate_scaling_every(1000),
      evaluate_scaling_during(1000),
      epsilon(1e-6),
      nsteps_remaining_for_evaluation(-1)
{}
Member Function Documentation

string PLearn::AutoScaledGradientOptimizer::_classname_ ( ) [static]
Reimplemented from PLearn::Optimizer.
Definition at line 61 of file AutoScaledGradientOptimizer.cc.
OptionList & PLearn::AutoScaledGradientOptimizer::_getOptionList_ ( ) [static]
Reimplemented from PLearn::Optimizer.
Definition at line 61 of file AutoScaledGradientOptimizer.cc.
RemoteMethodMap & PLearn::AutoScaledGradientOptimizer::_getRemoteMethodMap_ ( ) [static]
Reimplemented from PLearn::Optimizer.
Definition at line 61 of file AutoScaledGradientOptimizer.cc.
bool PLearn::AutoScaledGradientOptimizer::_isa_ ( const Object * o ) [static]

Reimplemented from PLearn::Optimizer.
Definition at line 61 of file AutoScaledGradientOptimizer.cc.
Object * PLearn::AutoScaledGradientOptimizer::_new_instance_for_typemap_ ( ) [static]
Reimplemented from PLearn::Object.
Definition at line 61 of file AutoScaledGradientOptimizer.cc.
void PLearn::AutoScaledGradientOptimizer::_static_initialize_ ( ) [static]
Reimplemented from PLearn::Optimizer.
Definition at line 61 of file AutoScaledGradientOptimizer.cc.
virtual void PLearn::AutoScaledGradientOptimizer::build ( ) [inline, virtual]
Post-constructor.
The normal implementation should call simply inherited::build(), then this class's build_(). This method should be callable again at later times, after modifying some option fields to change the "architecture" of the object.
Reimplemented from PLearn::Optimizer.
Definition at line 93 of file AutoScaledGradientOptimizer.h.
{ inherited::build(); build_(); }
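As a concrete illustration of the rebuild pattern described above (a hypothetical snippet, not taken from the PLearn sources):

AutoScaledGradientOptimizer opt;
opt.start_learning_rate = 1e-3;  // modify some build options...
opt.decrease_constant   = 1e-5;
opt.build();                     // ...then re-run build(): inherited::build() followed by build_()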
void PLearn::AutoScaledGradientOptimizer::build_ ( ) [inline, private]
Object-specific post-constructor.
This method should be redefined in subclasses and do the actual building of the object according to previously set option fields. Constructors can just set option fields, and then call build_. This method is NOT virtual and will typically be called only from three places: a constructor, the public virtual build() method, and possibly the public virtual read method (which calls its parent's read). build_() can assume that its parent's build_() has already been called.
Reimplemented from PLearn::Optimizer.
Definition at line 109 of file AutoScaledGradientOptimizer.h.
{}
string PLearn::AutoScaledGradientOptimizer::classname ( ) const [virtual]
Reimplemented from PLearn::Object.
Definition at line 61 of file AutoScaledGradientOptimizer.cc.
void PLearn::AutoScaledGradientOptimizer::declareOptions ( OptionList & ol ) [static, protected]
Declare options (data fields) for the class.
Redefine this in subclasses: call declareOption(...) for each option, and then call inherited::declareOptions(options). Please call the inherited method AT THE END to get the options listed in a consistent order (from most recently defined to least recently defined).

void MyDerivedClass::declareOptions(OptionList& ol)
{
    declareOption(ol, "inputsize", &MyObject::inputsize_,
                  OptionBase::buildoption,
                  "The size of the input; it must be provided");
    declareOption(ol, "weights", &MyObject::weights,
                  OptionBase::learntoption,
                  "The learned model weights");
    inherited::declareOptions(ol);
}
Parameters:
    ol    List of options that is progressively being constructed for the current class.
Reimplemented from PLearn::Optimizer.
Definition at line 75 of file AutoScaledGradientOptimizer.cc.
References PLearn::OptionBase::buildoption, PLearn::declareOption(), PLearn::Optimizer::declareOptions(), decrease_constant, epsilon, evaluate_scaling_during, evaluate_scaling_every, learning_rate, PLearn::OptionBase::learntoption, lr_schedule, start_learning_rate, and verbosity.
{
    declareOption(
        ol, "start_learning_rate",
        &AutoScaledGradientOptimizer::start_learning_rate,
        OptionBase::buildoption,
        "The initial learning rate\n");

    declareOption(
        ol, "learning_rate",
        &AutoScaledGradientOptimizer::learning_rate,
        OptionBase::learntoption,
        "The current learning rate\n");

    declareOption(
        ol, "decrease_constant",
        &AutoScaledGradientOptimizer::decrease_constant,
        OptionBase::buildoption,
        "The learning rate decrease constant \n");

    declareOption(
        ol, "lr_schedule",
        &AutoScaledGradientOptimizer::lr_schedule,
        OptionBase::buildoption,
        "Fixed schedule instead of decrease_constant. This matrix has 2 columns: iteration_threshold \n"
        "and learning_rate_factor. As soon as the iteration number goes above the iteration_threshold,\n"
        "the corresponding learning_rate_factor is applied (multiplied) to the start_learning_rate to\n"
        "obtain the learning_rate.\n");

    declareOption(
        ol, "verbosity",
        &AutoScaledGradientOptimizer::verbosity,
        OptionBase::buildoption,
        "Controls the amount of output. If zero, does not print anything.\n"
        "If 'verbosity'=V, print the current cost and learning rate if\n"
        "\n"
        "    stage % V == 0\n"
        "\n"
        "i.e. every V stages. (Default=0)\n");

    declareOption(
        ol, "evaluate_scaling_every",
        &AutoScaledGradientOptimizer::evaluate_scaling_every,
        OptionBase::buildoption,
        "every how-many steps should the mean and scaling be reevaluated\n");

    declareOption(
        ol, "evaluate_scaling_during",
        &AutoScaledGradientOptimizer::evaluate_scaling_during,
        OptionBase::buildoption,
        "how many steps should be used to re-evaluate the mean and scaling\n");

    declareOption(
        ol, "epsilon",
        &AutoScaledGradientOptimizer::epsilon,
        OptionBase::buildoption,
        "scaling will be 1/(mean_abs_grad + epsilon)\n");

    inherited::declareOptions(ol);
}
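To make the lr_schedule option concrete, here is an illustrative sketch (hypothetical values; opt is an AutoScaledGradientOptimizer instance, and the element access mirrors the lookup performed in optimizeN()). Column 0 holds the iteration threshold, column 1 the factor multiplied with start_learning_rate; decrease_constant is ignored whenever lr_schedule is non-empty.

Mat sched(3, 2);                       // rows of (iteration_threshold, learning_rate_factor)
sched(0,0) = 1000;  sched(0,1) = 1.0;  // stages    0..1000 : lr = 1e-2 * 1.0  = 1e-2
sched(1,0) = 5000;  sched(1,1) = 0.1;  // stages 1001..5000 : lr = 1e-2 * 0.1  = 1e-3
sched(2,0) = 10000; sched(2,1) = 0.01; // stages 5001+      : lr = 1e-2 * 0.01 = 1e-4
opt.lr_schedule = sched;
opt.start_learning_rate = 1e-2;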
static const PPath & PLearn::AutoScaledGradientOptimizer::declaringFile ( ) [inline, static]
Reimplemented from PLearn::Optimizer.
Definition at line 86 of file AutoScaledGradientOptimizer.h.
AutoScaledGradientOptimizer * PLearn::AutoScaledGradientOptimizer::deepCopy ( CopiesMap & copies ) const [virtual]
Reimplemented from PLearn::Optimizer.
Definition at line 61 of file AutoScaledGradientOptimizer.cc.
OptionList & PLearn::AutoScaledGradientOptimizer::getOptionList ( ) const [virtual]
Reimplemented from PLearn::Object.
Definition at line 61 of file AutoScaledGradientOptimizer.cc.
OptionMap & PLearn::AutoScaledGradientOptimizer::getOptionMap ( ) const [virtual]
Reimplemented from PLearn::Object.
Definition at line 61 of file AutoScaledGradientOptimizer.cc.
RemoteMethodMap & PLearn::AutoScaledGradientOptimizer::getRemoteMethodMap ( ) const [virtual]
Reimplemented from PLearn::Object.
Definition at line 61 of file AutoScaledGradientOptimizer.cc.
virtual void PLearn::AutoScaledGradientOptimizer::makeDeepCopyFromShallowCopy ( CopiesMap & copies ) [inline, virtual]
Does the necessary operations to transform a shallow copy (this) into a deep copy by deep-copying all the members that need to be.
This needs to be overridden by every class that adds "complex" data members to the class, such as Vec, Mat, PP<Something>, etc. Typical implementation:

void CLASS_OF_THIS::makeDeepCopyFromShallowCopy(CopiesMap& copies)
{
    inherited::makeDeepCopyFromShallowCopy(copies);
    deepCopyField(complex_data_member1, copies);
    deepCopyField(complex_data_member2, copies);
    ...
}
Parameters:
    copies    A map used by the deep-copy mechanism to keep track of already-copied objects.
Reimplemented from PLearn::Optimizer.
Definition at line 90 of file AutoScaledGradientOptimizer.h.
{ inherited::makeDeepCopyFromShallowCopy(copies); }
bool PLearn::AutoScaledGradientOptimizer::optimizeN ( VecStatsCollector & stats_coll ) [virtual]
Main optimization method, to be defined in subclasses.
Return true iff no further optimization is possible.
Implements PLearn::Optimizer.
Definition at line 148 of file AutoScaledGradientOptimizer.cc.
References PLearn::TVec< T >::clear(), PLearn::VarArray::clearGradient(), PLearn::Optimizer::cost, PLearn::TVec< T >::data(), decrease_constant, PLearn::displayVarGraph(), PLearn::endl(), epsilon, evaluate_scaling_during, evaluate_scaling_every, PLearn::VarArray::fbprop(), PLearn::TVec< T >::fill(), PLearn::TVec< T >::hasMissing(), i, learning_rate, PLearn::TMat< T >::length(), PLearn::TVec< T >::length(), lr_schedule, PLearn::max(), meanabsgrad, PLearn::min(), n, PLearn::Optimizer::nstages, nsteps_remaining_for_evaluation, PLearn::Optimizer::other_costs, param_gradients, param_values, PLearn::Optimizer::params, PLearn::perr, PLASSERT_MSG, PLERROR, PLearn::Optimizer::proppath, scaling, PLearn::TVec< T >::size(), PLearn::Optimizer::stage, start_learning_rate, PLearn::VecStatsCollector::update(), and verbosity.
{
    PLASSERT_MSG(other_costs.length()==0,
                 "gradient on other costs not currently supported");

    param_gradients.clear();

    int stage_max = stage + nstages; // the stage to reach

    int current_schedule = 0;
    int n_schedules = lr_schedule.length();
    if (n_schedules>0)
        while (current_schedule+1 < n_schedules
               && stage > lr_schedule(current_schedule,0))
            current_schedule++;

    while (stage < stage_max)
    {
        if (n_schedules>0)
        {
            while (current_schedule+1 < n_schedules
                   && stage > lr_schedule(current_schedule,0))
                current_schedule++;
            learning_rate = start_learning_rate * lr_schedule(current_schedule,1);
        }
        else
            learning_rate = start_learning_rate/(1.0+decrease_constant*stage);

        proppath.clearGradient();
        cost->gradient[0] = 1.0;

        static bool display_var_graph_before_fbprop = false;
        if (display_var_graph_before_fbprop)
            displayVarGraph(proppath, true, 333);

        proppath.fbprop();

#ifdef BOUNDCHECK
        int np = params.size();
        for(int i=0; i<np; i++)
            if (params[i]->value.hasMissing())
                PLERROR("parameter updated with NaN");
#endif

        static bool display_var_graph = false;
        if (display_var_graph)
            displayVarGraph(proppath, true, 333);

        // // Debugging of negative NLL bug...
        // if (cost->value[0] <= 0) {
        //     displayVarGraph(proppath, true, 333);
        //     cerr << "Negative NLL cost vector = " << cost << endl;
        //     PLERROR("Negative NLL encountered in optimization");
        // }

        // set params += -learning_rate * params.gradient * scaling
        {
            real* p_val = param_values.data();
            real* p_grad = param_gradients.data();
            real* p_scale = scaling.data();
            real neg_learning_rate = -learning_rate;
            int n = param_values.length();
            while(n--)
                *p_val++ += neg_learning_rate*(*p_grad++)*(*p_scale++);
        }

        if(stage%evaluate_scaling_every==0)
        {
            nsteps_remaining_for_evaluation = evaluate_scaling_during;
            meanabsgrad.clear();
            if(verbosity>=4)
                perr << "At stage " << stage
                     << " beginning evaluating meanabsgrad during "
                     << evaluate_scaling_during << " stages" << endl;
        }

        if(nsteps_remaining_for_evaluation>0)
        {
            real* p_grad = param_gradients.data();
            real* p_mean = meanabsgrad.data();
            int n = param_gradients.length();
            while(n--)
                *p_mean++ += fabs(*p_grad++);
            --nsteps_remaining_for_evaluation;

            if(nsteps_remaining_for_evaluation==0) // finalize evaluation
            {
                int n = param_gradients.length();
                for(int i=0; i<n; i++)
                {
                    meanabsgrad[i] /= evaluate_scaling_during;
                    scaling[i] = 1.0/(meanabsgrad[i]+epsilon);
                }
                if(verbosity>=4)
                    perr << "At stage " << stage
                         << " finished evaluating meanabsgrad. It's in range: ( "
                         << min(meanabsgrad) << ", " << max(meanabsgrad) << " )" << endl;
                if(verbosity>=5)
                    perr << meanabsgrad << endl;
                if(epsilon<0)
                    scaling.fill(1.0);
            }
        }

        param_gradients.clear();

        if (verbosity > 0 && stage % verbosity == 0)
        {
            MODULE_LOG << "Stage " << stage << ": " << cost->value
                       << "\tlr=" << learning_rate << endl;
        }
        stats_coll.update(cost->value);
        ++stage;
    }
    return false;
}
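In short, each stage applies param += -learning_rate * gradient * scaling, where scaling[i] = 1/(meanabsgrad[i] + epsilon) and meanabsgrad[i] is |gradient[i]| averaged over the last evaluate_scaling_during stages (re-estimated every evaluate_scaling_every stages). A negative epsilon disables the scaling (it is filled with 1.0, reducing to plain gradient descent), and note that this implementation always returns false. A self-contained sketch of the same update rule, stripped of PLearn's Vec/VarArray machinery:

#include <vector>

// One auto-scaled gradient step: parameters whose average absolute
// gradient is large take proportionally smaller steps.
void scaled_sgd_step(std::vector<double>& param,
                     const std::vector<double>& grad,
                     const std::vector<double>& meanabsgrad,
                     double learning_rate, double epsilon)
{
    for (std::size_t i = 0; i < param.size(); ++i) {
        double scaling = 1.0 / (meanabsgrad[i] + epsilon);
        param[i] -= learning_rate * grad[i] * scaling;
    }
}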
void PLearn::AutoScaledGradientOptimizer::setToOptimize ( const VarArray & the_params, Var the_cost, VarArray the_other_costs = VarArray(0), TVec< VarArray > the_other_params = TVec<VarArray>(0), real the_other_weight = 1 ) [virtual]
Reimplemented from PLearn::Optimizer.
Definition at line 129 of file AutoScaledGradientOptimizer.cc.
References PLearn::TVec< T >::clear(), epsilon, PLearn::TVec< T >::fill(), PLearn::VarArray::makeSharedGradient(), PLearn::VarArray::makeSharedValue(), meanabsgrad, n, PLearn::VarArray::nelems(), param_gradients, param_values, PLearn::Optimizer::params, PLearn::TVec< T >::resize(), scaling, and PLearn::Optimizer::setToOptimize().
{
    inherited::setToOptimize(the_params, the_cost, the_other_costs,
                             the_other_params, the_other_weight);
    int n = params.nelems();
    param_values = Vec(n);
    param_gradients = Vec(n);
    params.makeSharedValue(param_values);
    params.makeSharedGradient(param_gradients);
    scaling.resize(n);
    scaling.clear();
    if(epsilon<0)
        scaling.fill(1.0);
    meanabsgrad.resize(n);
    meanabsgrad.clear();
}
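The makeSharedValue()/makeSharedGradient() calls alias param_values and param_gradients to the concatenated storage of all parameter Vars, which is what lets optimizeN() update every parameter through one flat pointer loop:

// Effect of setToOptimize() (n = params.nelems()):
//   param_values    aliases the concatenated values of all parameter Vars,
//   param_gradients aliases their concatenated gradients,
// so the flat update in optimizeN(),
//   *p_val++ += neg_learning_rate * (*p_grad++) * (*p_scale++);
// writes directly into each Var's own storage.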
Member Data Documentation

StaticInitializer PLearn::AutoScaledGradientOptimizer::_static_initializer_ [static]

Reimplemented from PLearn::Optimizer.
Definition at line 86 of file AutoScaledGradientOptimizer.h.
real PLearn::AutoScaledGradientOptimizer::decrease_constant

Definition at line 67 of file AutoScaledGradientOptimizer.h.
Referenced by declareOptions(), and optimizeN().
real PLearn::AutoScaledGradientOptimizer::epsilon

Definition at line 82 of file AutoScaledGradientOptimizer.h.
Referenced by declareOptions(), optimizeN(), and setToOptimize().
int PLearn::AutoScaledGradientOptimizer::evaluate_scaling_during

Definition at line 80 of file AutoScaledGradientOptimizer.h.
Referenced by declareOptions(), and optimizeN().
int PLearn::AutoScaledGradientOptimizer::evaluate_scaling_every

Definition at line 78 of file AutoScaledGradientOptimizer.h.
Referenced by declareOptions(), and optimizeN().
real PLearn::AutoScaledGradientOptimizer::learning_rate

gradient descent specific parameters (directly modifiable by the user)
Definition at line 63 of file AutoScaledGradientOptimizer.h.
Referenced by declareOptions(), and optimizeN().
Mat PLearn::AutoScaledGradientOptimizer::lr_schedule

Definition at line 73 of file AutoScaledGradientOptimizer.h.
Referenced by declareOptions(), and optimizeN().
Vec PLearn::AutoScaledGradientOptimizer::meanabsgrad [protected]
Definition at line 101 of file AutoScaledGradientOptimizer.h.
Referenced by optimizeN(), and setToOptimize().
int PLearn::AutoScaledGradientOptimizer::nsteps_remaining_for_evaluation [protected]

Definition at line 102 of file AutoScaledGradientOptimizer.h.
Referenced by optimizeN().
Vec PLearn::AutoScaledGradientOptimizer::param_gradients [protected]

Definition at line 106 of file AutoScaledGradientOptimizer.h.
Referenced by optimizeN(), and setToOptimize().
Vec PLearn::AutoScaledGradientOptimizer::param_values [protected]
Definition at line 105 of file AutoScaledGradientOptimizer.h.
Referenced by optimizeN(), and setToOptimize().
Vec PLearn::AutoScaledGradientOptimizer::scaling [protected]
Definition at line 100 of file AutoScaledGradientOptimizer.h.
Referenced by optimizeN(), and setToOptimize().
real PLearn::AutoScaledGradientOptimizer::start_learning_rate

Definition at line 66 of file AutoScaledGradientOptimizer.h.
Referenced by declareOptions(), and optimizeN().
int PLearn::AutoScaledGradientOptimizer::verbosity

Definition at line 75 of file AutoScaledGradientOptimizer.h.
Referenced by declareOptions(), and optimizeN().