PLearn 0.1
#include <ConjGradientOptimizer.h>

Public Member Functions

    ConjGradientOptimizer ()
    virtual string classname () const
    virtual OptionList & getOptionList () const
    virtual OptionMap & getOptionMap () const
    virtual RemoteMethodMap & getRemoteMethodMap () const
    virtual ConjGradientOptimizer * deepCopy (CopiesMap &copies) const
    virtual void makeDeepCopyFromShallowCopy (CopiesMap &copies)
        Does the necessary operations to transform a shallow copy (this) into a deep copy by deep-copying all the members that need to be.
    virtual void build ()
        Post-constructor.
    virtual bool optimizeN (VecStatsCollector &stat_coll)
        Main optimization method, to be defined in subclasses.
    virtual void reset ()

Static Public Member Functions

    static string _classname_ ()
    static OptionList & _getOptionList_ ()
    static RemoteMethodMap & _getRemoteMethodMap_ ()
    static Object * _new_instance_for_typemap_ ()
    static bool _isa_ (const Object *o)
    static void _static_initialize_ ()
    static const PPath & declaringFile ()

Public Attributes

    real constrain_limit
    real expected_red
    real max_extrapolate
    real rho
    real sigma
    real slope_ratio
    int max_eval_per_line_search
    bool no_negative_gamma
    int verbosity
    int minibatch_n_samples
    int minibatch_n_line_searches

Static Public Attributes

    static StaticInitializer _static_initializer_

Protected Member Functions

    void findDirection ()
        Find the new search direction for the line search algorithm.
    bool lineSearch ()
        Search for the minimum in the current search direction.
    void updateSearchDirection (real gamma)
        Update the search direction as search_direction = delta + gamma * search_direction, where 'delta' is supposed to be the opposite gradient at the point we have reached during the line search; 'current_opp_gradient' is also updated in this function (set equal to 'delta').
    real polakRibiere ()
        A Conjugate Gradient formula finds the new search direction, given the current gradient, the previous one, and the current search direction.
    real minimizeLineSearch ()
        A line search algorithm moves 'params' to the value minimizing 'cost', when moving in the direction 'search_direction'.
    real computeCostValue (real alpha)
        Return cost->value() after an update of params with step size alpha in the current search direction, i.e. f(x) = cost(params + x*search_direction) at x = alpha.
    real computeDerivative (real alpha)
        Return the derivative of the function f(x) = cost(params + x*search_direction) at x = alpha.
    void computeCostAndDerivative (real alpha, real &cost, real &derivative)

Static Protected Member Functions

    static void declareOptions (OptionList &ol)
        Declare options (data fields) for the class.

Protected Attributes

    int minibatch_curpos
    real bracket_limit
        Bracket limit.
    real cubic_a
        Cubic interpolation coefficients.
    real cubic_b
    real current_cost
        Current cost (=function) value.
    real fun_deriv2
        Function derivative (w.r.t. the step along the search direction).
    real fun_deriv1
    real fun_val1
        Function values.
    real fun_val2
    real step1
        Step values along the search direction, during line search.
    real step2
    int fun_eval_count
        Counter to make sure the number of function evaluations does not exceed the 'max_eval_per_line_search' option.
    bool line_search_failed
        Booleans indicating the line search outcome.
    bool line_search_succeeded
    Vec current_opp_gradient
        Current opposite gradient value.
    Vec search_direction
        Current search direction for line search.
    Vec tmp_storage
        Temporary data storage.
    Vec delta
        Temporary storage of the gradient.

Private Types

    typedef Optimizer inherited

Private Member Functions

    void build_ ()
        Object-specific post-constructor.
Definition at line 51 of file ConjGradientOptimizer.h.
typedef Optimizer PLearn::ConjGradientOptimizer::inherited [private]
Reimplemented from PLearn::Optimizer.
Definition at line 53 of file ConjGradientOptimizer.h.
PLearn::ConjGradientOptimizer::ConjGradientOptimizer ( )
Definition at line 54 of file ConjGradientOptimizer.cc.
    : constrain_limit(0.1),
      expected_red(1),
      max_extrapolate(3),
      rho(1e-2),
      sigma(0.5),
      slope_ratio(100),
      max_eval_per_line_search(20),
      no_negative_gamma(true),
      verbosity(0),
      minibatch_n_samples(0),
      minibatch_n_line_searches(3),
      minibatch_curpos(0),
      line_search_failed(false),
      line_search_succeeded(false)
{}
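Since every tunable field listed under Public Attributes is a build option, an instance can be configured by assigning to those fields and then calling build(). The sketch below is illustrative only (the values are arbitrary, and the optimizer must still be attached to its parameters and cost through the inherited Optimizer interface before it can be used):

#include <ConjGradientOptimizer.h>

using namespace PLearn;

void configure_cg_example()
{
    ConjGradientOptimizer opt;          // defaults as in the constructor above
    opt.sigma = 0.1;                    // stricter Wolfe-Powell curvature condition
    opt.rho = 1e-2;                     // keep 0 < rho < sigma < 1
    opt.max_eval_per_line_search = 50;  // allow longer line searches
    opt.verbosity = 10;                 // print the current cost every 10 stages
    opt.build();                        // post-constructor
}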
string PLearn::ConjGradientOptimizer::_classname_ ( ) [static]
Reimplemented from PLearn::Optimizer.
Definition at line 110 of file ConjGradientOptimizer.cc.
OptionList & PLearn::ConjGradientOptimizer::_getOptionList_ ( ) [static]
Reimplemented from PLearn::Optimizer.
Definition at line 110 of file ConjGradientOptimizer.cc.
RemoteMethodMap & PLearn::ConjGradientOptimizer::_getRemoteMethodMap_ ( ) [static]
Reimplemented from PLearn::Optimizer.
Definition at line 110 of file ConjGradientOptimizer.cc.
bool PLearn::ConjGradientOptimizer::_isa_ ( const Object * o ) [static]
Reimplemented from PLearn::Optimizer.
Definition at line 110 of file ConjGradientOptimizer.cc.
Object * PLearn::ConjGradientOptimizer::_new_instance_for_typemap_ ( ) [static]
Reimplemented from PLearn::Object.
Definition at line 110 of file ConjGradientOptimizer.cc.
void PLearn::ConjGradientOptimizer::_static_initialize_ ( ) [static]
Reimplemented from PLearn::Optimizer.
Definition at line 110 of file ConjGradientOptimizer.cc.
virtual void PLearn::ConjGradientOptimizer::build ( ) [inline, virtual]
Post-constructor.
The normal implementation should simply call inherited::build(), then this class's build_(). This method should be callable again at later times, after modifying some option fields to change the "architecture" of the object.
Reimplemented from PLearn::Optimizer.
Definition at line 114 of file ConjGradientOptimizer.h.
{ inherited::build(); build_(); }
void PLearn::ConjGradientOptimizer::build_ ( ) [private]
Object-specific post-constructor.
This method should be redefined in subclasses and do the actual building of the object according to previously set option fields. Constructors can just set option fields, and then call build_. This method is NOT virtual, and will typically be called only from three places: a constructor, the public virtual build() method, and possibly the public virtual read method (which calls its parent's read). build_() can assume that its parent's build_() has already been called.
Reimplemented from PLearn::Optimizer.
Definition at line 204 of file ConjGradientOptimizer.cc.
References current_opp_gradient, delta, n, PLearn::VarArray::nelems(), PLearn::Optimizer::params, PLearn::TVec< T >::resize(), search_direction, and tmp_storage.
{
    // Make sure the internal data have the right size.
    int n = params.nelems();
    current_opp_gradient.resize(n);
    search_direction.resize(n);
    tmp_storage.resize(n);
    delta.resize(n);
}
string PLearn::ConjGradientOptimizer::classname ( ) const [virtual]
Reimplemented from PLearn::Object.
Definition at line 110 of file ConjGradientOptimizer.cc.
void PLearn::ConjGradientOptimizer::computeCostAndDerivative ( real alpha, real & cost, real & derivative ) [protected]
Definition at line 216 of file ConjGradientOptimizer.cc.
References PLearn::Optimizer::computeGradient(), PLearn::VarArray::copyFrom(), PLearn::VarArray::copyTo(), current_cost, current_opp_gradient, delta, PLearn::dot(), PLearn::endl(), PLearn::fast_exact_is_equal(), PLearn::VarArray::nelems(), PLearn::Optimizer::params, PLearn::perr, search_direction, tmp_storage, and PLearn::VarArray::update().
Referenced by minimizeLineSearch().
{
    if (fast_exact_is_equal(alpha, 0)) {
        cost = this->current_cost;
        derivative = -dot(this->search_direction, this->current_opp_gradient);
    } else {
        this->params.copyTo(this->tmp_storage);
        this->params.update(alpha, this->search_direction);
        computeGradient(this->delta);
        cost = this->cost->value[0];
#if 0
        Vec tmpparams(this->params.nelems());
        this->params >> tmpparams;
        perr << "Params: " << tmpparams << " Cost: " << cost << endl;
#endif
        derivative = dot(this->search_direction, this->delta);
        this->params.copyFrom(this->tmp_storage);
    }
}
real PLearn::ConjGradientOptimizer::computeCostValue ( real alpha ) [protected]
Return cost->value() after an update of params with step size alpha in the current search direction, i.e. f(x) = cost(params + x*search_direction) at x = alpha.
The parameters' values are not modified by this function.
Definition at line 241 of file ConjGradientOptimizer.cc.
References c, PLearn::VarArray::copyFrom(), PLearn::VarArray::copyTo(), PLearn::Optimizer::cost, current_cost, PLearn::endl(), PLearn::fast_exact_is_equal(), PLearn::VarArray::fprop(), PLearn::VarArray::nelems(), PLearn::Optimizer::params, PLearn::perr, PLearn::Optimizer::proppath, search_direction, tmp_storage, and PLearn::VarArray::update().
{
    if (fast_exact_is_equal(alpha, 0))
        return this->current_cost;
    this->params.copyTo(this->tmp_storage);
    this->params.update(alpha, this->search_direction);
    this->proppath.fprop();
    real c = this->cost->value[0];
#if 0
    Vec tmpparams(this->params.nelems());
    this->params >> tmpparams;
    perr << "Params: " << tmpparams << " Cost: " << c << endl;
#endif
    this->params.copyFrom(this->tmp_storage);
    return c;
}
real PLearn::ConjGradientOptimizer::computeDerivative ( real alpha ) [protected]
Return the derivative of the function f(x) = cost(params + x*search_direction) at x = alpha.
The parameters' values are not modified by this function (however, the gradients are altered).
Definition at line 263 of file ConjGradientOptimizer.cc.
References PLearn::Optimizer::computeGradient(), PLearn::VarArray::copyFrom(), PLearn::VarArray::copyTo(), PLearn::Optimizer::cost, current_opp_gradient, delta, PLearn::dot(), PLearn::endl(), PLearn::fast_exact_is_equal(), PLearn::VarArray::nelems(), PLearn::Optimizer::params, PLearn::perr, search_direction, tmp_storage, and PLearn::VarArray::update().
{
    if (fast_exact_is_equal(alpha, 0))
        return -dot(this->search_direction, this->current_opp_gradient);
    this->params.copyTo(this->tmp_storage);
    this->params.update(alpha, this->search_direction);
    computeGradient(this->delta);
#if 0
    Vec tmpparams(this->params.nelems());
    this->params >> tmpparams;
    perr << "Params: " << tmpparams << " Cost: " << this->cost->value[0] << endl;
#endif
    this->params.copyFrom(this->tmp_storage);
    return dot(this->search_direction, this->delta);
}
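To make these helpers concrete, here is a self-contained sketch (plain C++ with a hypothetical quadratic cost, not PLearn code) of the one-dimensional restriction f(alpha) = cost(params + alpha*search_direction) and its derivative, computed the same way as above: move to a trial point, evaluate, take the dot product of the gradient with the search direction, then restore the parameters.

#include <cstddef>
#include <cstdio>
#include <vector>

// Hypothetical cost: C(p) = 0.5 * ||p||^2, whose gradient is simply p.
static double cost(const std::vector<double>& p) {
    double c = 0;
    for (double v : p) c += 0.5 * v * v;
    return c;
}

int main() {
    std::vector<double> params = {3.0, -1.0};
    std::vector<double> dir    = {-3.0, 1.0};   // e.g. the opposite gradient
    double alpha = 0.25;

    // f(alpha): evaluate the cost at the trial point params + alpha*dir.
    std::vector<double> trial = params;
    for (std::size_t i = 0; i < trial.size(); ++i) trial[i] += alpha * dir[i];
    double f_alpha = cost(trial);

    // f'(alpha) = dot(gradient at trial point, dir); here gradient(p) = p.
    double deriv = 0;
    for (std::size_t i = 0; i < trial.size(); ++i) deriv += trial[i] * dir[i];

    // 'params' itself was never modified, mirroring the copyTo/update/copyFrom pattern above.
    std::printf("f(%.2f) = %.4f, f'(%.2f) = %.4f\n", alpha, f_alpha, alpha, deriv);
    return 0;
}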
void PLearn::ConjGradientOptimizer::declareOptions ( OptionList & ol ) [static, protected]
Declare options (data fields) for the class.
Redefine this in subclasses: call declareOption(...) for each option, and then call inherited::declareOptions(options). Please call the inherited method AT THE END to get the options listed in a consistent order (from most recently defined to least recently defined).
void MyDerivedClass::declareOptions(OptionList& ol)
{
    declareOption(ol, "inputsize", &MyObject::inputsize_, OptionBase::buildoption,
                  "The size of the input; it must be provided");
    declareOption(ol, "weights", &MyObject::weights, OptionBase::learntoption,
                  "The learned model weights");
    inherited::declareOptions(ol);
}
Parameters:
    ol - List of options that is progressively being constructed for the current class.
Reimplemented from PLearn::Optimizer.
Definition at line 115 of file ConjGradientOptimizer.cc.
References PLearn::OptionBase::buildoption, constrain_limit, PLearn::declareOption(), PLearn::Optimizer::declareOptions(), expected_red, max_eval_per_line_search, max_extrapolate, minibatch_n_line_searches, minibatch_n_samples, no_negative_gamma, rho, sigma, slope_ratio, and verbosity.
{
    declareOption(
        ol, "verbosity", &ConjGradientOptimizer::verbosity,
        OptionBase::buildoption,
        "Controls the amount of output. If zero, does not print anything.\n"
        "If 'verbosity'=V, print the current cost if\n"
        "\n"
        "    stage % V == 0\n"
        "\n"
        "i.e. every V stages. (Default=0)\n");

    declareOption(
        ol, "expected_red", &ConjGradientOptimizer::expected_red,
        OptionBase::buildoption,
        "Expected function reduction at first step.");

    declareOption(
        ol, "no_negative_gamma", &ConjGradientOptimizer::no_negative_gamma,
        OptionBase::buildoption,
        "If true, then a negative value for gamma in the Polak-Ribiere\n"
        "formula will trigger a restart.");

    declareOption(
        ol, "sigma", &ConjGradientOptimizer::sigma,
        OptionBase::buildoption,
        "Constant in the Wolfe-Powell stopping conditions. It is the maximum allowed\n"
        "absolute ratio between previous and new slopes (derivatives in the search\n"
        "direction), thus setting sigma to low (positive) values forces higher\n"
        "precision in the line searches.\n"
        "Tuning sigma (depending on the nature of the function to be optimized)\n"
        "may speed up the minimization.");

    declareOption(
        ol, "rho", &ConjGradientOptimizer::rho,
        OptionBase::buildoption,
        "Constant in the Wolfe-Powell stopping conditions.\n"
        "Rho is the minimum allowed fraction of the expected reduction (computed\n"
        "from the slope at the initial point of the line search).\n"
        "Constants must satisfy 0 < rho < sigma < 1.\n"
        "It is probably not worth playing much with rho.\n");

    declareOption(
        ol, "constrain_limit", &ConjGradientOptimizer::constrain_limit,
        OptionBase::buildoption,
        "Multiplicative coefficient to constrain the evaluation bracket.\n"
        "We don't re-evaluate the function if we are within 'constrain_limit'\n"
        "of the current bracket.");

    declareOption(
        ol, "max_extrapolate", &ConjGradientOptimizer::max_extrapolate,
        OptionBase::buildoption,
        "Maximum coefficient for bracket extrapolation. This limits the\n"
        "extrapolation to be within 'max_extrapolate' times the current step size.");

    declareOption(
        ol, "max_eval_per_line_search", &ConjGradientOptimizer::max_eval_per_line_search,
        OptionBase::buildoption,
        "Maximum number of function evaluations during line search.");

    declareOption(
        ol, "slope_ratio", &ConjGradientOptimizer::slope_ratio,
        OptionBase::buildoption,
        "Maximum slope ratio.");

    declareOption(
        ol, "minibatch_n_samples", &ConjGradientOptimizer::minibatch_n_samples,
        OptionBase::buildoption,
        "If >0 we'll do minibatch. In minibatch mode, weight updates are based on\n"
        "cost and gradients computed on a subset of the whole training set, made\n"
        "of minibatch_n_samples consecutive samples. Each such subset will be used\n"
        "to perform minibatch_n_line_searches line searches before moving to the\n"
        "next minibatch subset.\n");

    declareOption(
        ol, "minibatch_n_line_searches", &ConjGradientOptimizer::minibatch_n_line_searches,
        OptionBase::buildoption,
        "How many line searches to perform with each minibatch subset.");

    inherited::declareOptions(ol);
}
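Read together, 'rho' and 'sigma' define the Wolfe-Powell acceptance test applied to each candidate step. The helper below restates that test outside the class; the function and its argument names are illustrative only, extracted from the bracketing loop of minimizeLineSearch() documented further down (the caller additionally clamps interpolated steps using 'constrain_limit').

// f0, d0: cost and directional derivative at step 0 (d0 < 0 for a descent direction).
// f1, d1: cost and directional derivative at the candidate step 'step'.
bool wolfe_powell_ok(double f0, double d0, double f1, double d1,
                     double step, double rho, double sigma)
{
    bool sufficient_decrease = (f1 <= f0 + step * rho * d0);  // Armijo-type condition
    bool curvature_ok        = (d1 <= -sigma * d0);           // slope not too positive
    return sufficient_decrease && curvature_ok;
}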
static const PPath & PLearn::ConjGradientOptimizer::declaringFile ( ) [inline, static]
ConjGradientOptimizer * PLearn::ConjGradientOptimizer::deepCopy ( CopiesMap & copies ) const [virtual]
Reimplemented from PLearn::Optimizer.
Definition at line 110 of file ConjGradientOptimizer.cc.
void PLearn::ConjGradientOptimizer::findDirection ( ) [protected]
Find the new search direction for the line search algorithm.
Definition at line 284 of file ConjGradientOptimizer.cc.
References PLearn::endl(), no_negative_gamma, polakRibiere(), updateSearchDirection(), and verbosity.
Referenced by optimizeN().
{
    real gamma = polakRibiere();
    if (gamma < 0 && no_negative_gamma) {
        if (verbosity > 0)
            MODULE_LOG << "gamma = " << gamma << " < 0 ==> Restarting" << endl;
        gamma = 0;
    }
    /*
    // Old code triggering restart.
    else {
        real dp = dot(delta, current_opp_gradient);
        real delta_n = pownorm(delta);
        if (abs(dp) > restart_coeff * delta_n) {
            if (verbosity >= 5)
                pout << "Restart triggered !" << endl;
            gamma = 0;
        }
    }
    */
    updateSearchDirection(gamma);
}
OptionList & PLearn::ConjGradientOptimizer::getOptionList ( ) const [virtual]
Reimplemented from PLearn::Object.
Definition at line 110 of file ConjGradientOptimizer.cc.
OptionMap & PLearn::ConjGradientOptimizer::getOptionMap ( ) const [virtual]
Reimplemented from PLearn::Object.
Definition at line 110 of file ConjGradientOptimizer.cc.
RemoteMethodMap & PLearn::ConjGradientOptimizer::getRemoteMethodMap ( ) const [virtual]
Reimplemented from PLearn::Object.
Definition at line 110 of file ConjGradientOptimizer.cc.
bool PLearn::ConjGradientOptimizer::lineSearch ( ) [protected]
Search the minimum in the current search direction.
Return false iff no improvement was possible (and we can stop here).
Definition at line 435 of file ConjGradientOptimizer.cc.
References PLearn::endl(), PLearn::fast_exact_is_equal(), minimizeLineSearch(), PLearn::Optimizer::params, PLWARNING, search_direction, PLearn::VarArray::update(), and verbosity.
Referenced by optimizeN().
{
    real step = minimizeLineSearch();
    if (step < 0)
        // Hopefully this will not happen.
        PLWARNING("Negative step!");
    bool no_improvement_possible = fast_exact_is_equal(step, 0);
    if (no_improvement_possible) {
        if (verbosity > 0)
            MODULE_LOG << "No more progress made by the line search, stopping" << endl;
    } else
        params.update(step, search_direction);
    return !no_improvement_possible;
}
void PLearn::ConjGradientOptimizer::makeDeepCopyFromShallowCopy ( CopiesMap & copies ) [virtual]
Does the necessary operations to transform a shallow copy (this) into a deep copy by deep-copying all the members that need to be.
This needs to be overridden by every class that adds "complex" data members, such as Vec, Mat, PP<Something>, etc. Typical implementation:
void CLASS_OF_THIS::makeDeepCopyFromShallowCopy(CopiesMap& copies)
{
    inherited::makeDeepCopyFromShallowCopy(copies);
    deepCopyField(complex_data_member1, copies);
    deepCopyField(complex_data_member2, copies);
    ...
}
Parameters:
    copies - A map used by the deep-copy mechanism to keep track of already-copied objects.
Reimplemented from PLearn::Optimizer.
Definition at line 452 of file ConjGradientOptimizer.cc.
References current_opp_gradient, PLearn::deepCopyField(), delta, PLearn::Optimizer::makeDeepCopyFromShallowCopy(), search_direction, and tmp_storage.
{
    inherited::makeDeepCopyFromShallowCopy(copies);
    deepCopyField(current_opp_gradient, copies);
    deepCopyField(search_direction, copies);
    deepCopyField(tmp_storage, copies);
    deepCopyField(delta, copies);
}
real PLearn::ConjGradientOptimizer::minimizeLineSearch ( ) [protected]
A line search algorithm moves 'params' to the value minimizing 'cost', when moving in the direction 'search_direction'.
It must not update 'current_opp_gradient' (this is done later in updateSearchDirection(..)). It returns the optimal step found along 'search_direction' to minimize the cost. The following line search algorithm is inspired by Carl Rasmussen's 'minimize' Matlab algorithm.
Definition at line 309 of file ConjGradientOptimizer.cc.
References bracket_limit, computeCostAndDerivative(), constrain_limit, cubic_a, cubic_b, current_opp_gradient, fun_deriv1, fun_deriv2, fun_eval_count, fun_val1, fun_val2, line_search_failed, line_search_succeeded, PLearn::max(), max_eval_per_line_search, max_extrapolate, PLearn::min(), PLearn::pownorm(), rho, sigma, PLearn::sqrt(), step1, and step2.
Referenced by lineSearch().
{
    // We may need to perform two iterations of line search if the first one
    // fails.
    bool try_again = true;
    while (try_again) {
        try_again = false;
        real fun_val0 = fun_val1;
        computeCostAndDerivative(step1, fun_val2, fun_deriv2);
        real fun_val3 = fun_val1;
        real fun_deriv3 = fun_deriv1;
        real step3 = - step1;
        fun_eval_count = max_eval_per_line_search;
        line_search_succeeded = false;
        bracket_limit = -1;
        while (true) {
            while ( (fun_val2 > fun_val1 + step1 * rho * fun_deriv1 ||
                     fun_deriv2 > - sigma * fun_deriv1) &&
                    fun_eval_count > 0 ) {
                // Tighten bracket.
                bracket_limit = step1;
                if (fun_val2 > fun_val1) {
                    // Quadratic fit.
                    step2 = step3 - (0.5*fun_deriv3*step3*step3) /
                                    (fun_deriv3*step3 + fun_val2 - fun_val3);
                } else {
                    // Cubic fit.
                    cubic_a = 6*(fun_val2-fun_val3)/step3 + 3*(fun_deriv2+fun_deriv3);
                    cubic_b = 3*(fun_val3-fun_val2) - step3*(fun_deriv3+2*fun_deriv2);
                    step2 = (sqrt(cubic_b*cubic_b - cubic_a*fun_deriv2*step3*step3)
                             - cubic_b) / cubic_a;
                }
                if (isnan(step2) || isinf(step2))
                    // Shit happens => bisection.
                    step2 = step3/2;
                // Constrained range.
                step2 = max(min(step2, constrain_limit*step3),
                            (1-constrain_limit)*step3);
                // Increase step and update function value and derivative.
                step1 += step2;
                computeCostAndDerivative(step1, fun_val2, fun_deriv2);
                // Update point 3.
                step3 = step3 - step2;
                fun_eval_count--;
            }
            if (fun_val2 > fun_val1 + step1*rho*fun_deriv1 ||
                fun_deriv2 > -sigma*fun_deriv1)
                // Failure.
                break;
            else if (fun_deriv2 > sigma * fun_deriv1) {
                // Success.
                line_search_succeeded = true;
                break;
            } else if (fun_eval_count == 0)
                // Failure.
                break;
            // Cubic fit.
            cubic_a = 6*(fun_val2-fun_val3)/step3 + 3*(fun_deriv2+fun_deriv3);
            cubic_b = 3*(fun_val3-fun_val2) - step3*(fun_deriv3+2*fun_deriv2);
            step2 = -fun_deriv2*step3*step3 /
                    (cubic_b + sqrt(cubic_b*cubic_b - cubic_a*fun_deriv2*step3*step3));
            if (isnan(step2) || isinf(step2) || step2 < 0) {
                // Numerical issue, or wrong sign.
                if (bracket_limit < -0.5)
                    // No upper limit.
                    step2 = step1 * (max_extrapolate - 1);
                else
                    step2 = (bracket_limit - step1) / 2;
            } else if (bracket_limit > -0.5 && (step2 + step1 > bracket_limit))
                // Extrapolation beyond maximum.
                step2 = (bracket_limit - step1) / 2;
            else if (bracket_limit < -0.5 && step2 + step1 > step1 * max_extrapolate) {
                // Extrapolation beyond limit.
                step2 = step1 * (max_extrapolate - 1);
            } else if (step2 < - step3 * constrain_limit) {
                step2 = - step3 * constrain_limit; // % too close to limit?
            } else if (bracket_limit > -0.5 &&
                       step2 < (bracket_limit - step1) * (1 - constrain_limit))
                // Too close to limit.
                step2 = (bracket_limit - step1) * (1 - constrain_limit);
            // Point 3 = point 2.
            fun_val3 = fun_val2;
            fun_deriv3 = fun_deriv2;
            step3 = - step2;
            // Update step and function value and derivative.
            step1 += step2;
            computeCostAndDerivative(step1, fun_val2, fun_deriv2);
            fun_eval_count--;
        }
        if (line_search_succeeded) {
            fun_val1 = fun_val2;
            line_search_failed = false;
        } else {
            // Come back to initial point.
            fun_val1 = fun_val0;
            // If it is the second time it fails, then we cannot do better.
            if (line_search_failed)
                return 0;
            // Original code:
            //   tmp = df1; df1 = df2; df2 = tmp; % swap derivatives
            //   s = -df1;                        % try steepest
            //   d1 = -s'*s;
            // We do not do that... it looks weird!
            // We will actually do s = -df0 as this seems more logical.
            // TODO See Carl Rasmussen's answer to email...
            fun_deriv1 = - pownorm(current_opp_gradient);
            step1 = 1 / (1 - fun_deriv1);
            line_search_failed = true;
            try_again = true;
        }
    }
    return step1;
}
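For readability, the interpolation used while tightening the bracket can be factored out as below. This is a restatement of the formulas in the loop above with local, illustrative names (z3 is the signed offset from the current point to the other bracket end); in the member function the result is additionally clamped into the range allowed by 'constrain_limit'.

#include <cmath>

double bracket_interpolation_step(double f1,             // value at the line-search start
                                  double f2, double d2,  // value/slope at the current point
                                  double f3, double d3,  // value/slope at the other bracket end
                                  double z3)             // signed offset to the other end
{
    double step;
    if (f2 > f1) {
        // Quadratic fit when the current value is worse than the starting value.
        step = z3 - (0.5 * d3 * z3 * z3) / (d3 * z3 + f2 - f3);
    } else {
        // Cubic fit otherwise.
        double a = 6 * (f2 - f3) / z3 + 3 * (d2 + d3);
        double b = 3 * (f3 - f2) - z3 * (d3 + 2 * d2);
        step = (std::sqrt(b * b - a * d2 * z3 * z3) - b) / a;
    }
    if (std::isnan(step) || std::isinf(step))
        step = z3 / 2;   // fall back to bisection on numerical failure
    return step;
}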
bool PLearn::ConjGradientOptimizer::optimizeN ( VecStatsCollector & stats_coll ) [virtual]
Main optimization method, to be defined in subclasses.
Return true iff no further optimization is possible.
Implements PLearn::Optimizer.
Definition at line 464 of file ConjGradientOptimizer.cc.
References PLearn::Optimizer::computeOppositeGradient(), PLearn::Optimizer::cost, current_cost, current_opp_gradient, delta, PLearn::Optimizer::early_stop, PLearn::endl(), expected_red, findDirection(), PLearn::VarArray::fprop(), fun_deriv1, fun_val1, PLearn::SumOfVariable::getDataSet(), PLearn::VMat::length(), lineSearch(), minibatch_curpos, minibatch_n_line_searches, minibatch_n_samples, PLearn::Optimizer::nstages, PLWARNING, PLearn::pownorm(), PLearn::Optimizer::proppath, search_direction, PLearn::SumOfVariable::setSampleRange(), PLearn::Optimizer::stage, step1, PLearn::VecStatsCollector::update(), and verbosity.
{
    int stage_max = stage + nstages; // The stage to reach.

    SumOfVariable* sumofvar = 0;
    int trainsetlength = -1;
    int minibatch_n_line_searches_left = minibatch_n_line_searches;
    if (minibatch_n_samples > 0) {
        sumofvar = dynamic_cast<SumOfVariable*>((Variable*)cost);
        if (sumofvar) {
            trainsetlength = sumofvar->getDataSet()->length();
            sumofvar->setSampleRange(minibatch_curpos, minibatch_n_samples, true);
        } else {
            PLWARNING("In ConjGradientOptimizer, minibatch_n_samples>0 but can't "
                      "do minibatch since cost does not seem to be a SumOfVariable "
                      " (the only type of variable for which minibatch is supported)");
        }
    }

    if (stage == 0) {
        computeOppositeGradient(current_opp_gradient);
        // First search direction = - gradient.
        search_direction << current_opp_gradient;
        current_cost = cost->value[0];
        fun_val1 = current_cost;
        fun_deriv1 = - pownorm(search_direction);
        step1 = expected_red / (1 - fun_deriv1);
    }

    if (early_stop) {
        // The 'early_stop' flag is already set: we must still update the stats
        // collector with the current cost value.
        this->proppath.fprop();
        stats_coll.update(cost->value);
    }

    for (; !early_stop && stage < stage_max; stage++) {
        if (sumofvar && minibatch_n_line_searches_left == 0) {
            minibatch_curpos = (minibatch_curpos + minibatch_n_samples) % trainsetlength;
            sumofvar->setSampleRange(minibatch_curpos, minibatch_n_samples, true);
            minibatch_n_line_searches_left = minibatch_n_line_searches;
        }
        // Make a line search along the current search direction.
        early_stop = !lineSearch();
        if (sumofvar) // we're doing minibatch
            --minibatch_n_line_searches_left;
        // Ensure 'delta' contains the opposite gradient at the new point
        // reached after the line search.
        // Also update 'current_cost'.
        computeOppositeGradient(delta);
        current_cost = cost->value[0];
        // Display current cost value if required.
        if (verbosity > 0 && stage % verbosity == 0)
            MODULE_LOG << "Stage " << stage << ": " << current_cost << endl;
        stats_coll.update(cost->value);
        // Find the new search direction if we need to continue.
        if (!early_stop)
            findDirection();
    }

    if (early_stop && verbosity > 0)
        MODULE_LOG << "Early stopping at stage " << stage
                   << "; current-cost=" << current_cost << endl;

    return early_stop;
}
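As a usage sketch, a caller might drive the optimizer as below. This is illustrative only: it assumes the inherited Optimizer interface provides setToOptimize(params, cost) and the 'nstages' option, which is how other PLearn optimizers are typically wired up; check the Optimizer documentation for the exact entry points.

// Hypothetical driver (not from the PLearn sources). 'params' is the VarArray
// to optimize and 'cost' a Var that depends on it.
#include <ConjGradientOptimizer.h>

using namespace PLearn;

void run_cg_example(VarArray params, Var cost)
{
    ConjGradientOptimizer opt;
    opt.nstages = 1;                  // one line search per optimizeN() call
    opt.setToOptimize(params, cost);  // attach variables and cost (inherited from Optimizer)
    opt.build();

    VecStatsCollector stats;
    for (int i = 0; i < 100; ++i)
        if (opt.optimizeN(stats))     // true when no further progress is possible
            break;
}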
real PLearn::ConjGradientOptimizer::polakRibiere ( ) [protected]
A Conjugate Gradient formula finds the new search direction, given the current gradient, the previous one, and the current search direction.
It returns a coefficient gamma, which will be used in: search(t) = -gradient(t) + gamma * search(t-1). The Polak-Ribiere formula is: gamma = dot(gradient(t), gradient(t) - gradient(t-1)) / ||gradient(t-1)||^2.
Definition at line 547 of file ConjGradientOptimizer.cc.
References current_opp_gradient, delta, PLearn::dot(), PLearn::pownorm(), and tmp_storage.
Referenced by findDirection().
{
    real normg = pownorm(this->current_opp_gradient);
    // At this point, delta = opposite gradient at new point.
    this->tmp_storage << this->delta;
    this->tmp_storage -= this->current_opp_gradient;
    return dot(this->tmp_storage, this->delta) / normg;
}
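For reference, here is a stand-alone restatement of the Polak-Ribiere coefficient with plain std::vector gradients (illustrative code, not part of PLearn). Note that the member function above works with opposite gradients ('current_opp_gradient' and 'delta'); this leaves gamma unchanged, since negating both vectors changes neither the numerator nor the denominator.

#include <cstddef>
#include <vector>

double polak_ribiere(const std::vector<double>& g_new,
                     const std::vector<double>& g_old)
{
    double num = 0.0, denom = 0.0;
    for (std::size_t i = 0; i < g_new.size(); ++i) {
        num   += g_new[i] * (g_new[i] - g_old[i]);  // dot(g_t, g_t - g_{t-1})
        denom += g_old[i] * g_old[i];               // ||g_{t-1}||^2
    }
    // findDirection() resets gamma to 0 when it is negative and no_negative_gamma is true.
    return num / denom;
}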
void PLearn::ConjGradientOptimizer::reset ( ) [virtual]
Reimplemented from PLearn::Optimizer.
Definition at line 559 of file ConjGradientOptimizer.cc.
References line_search_failed, line_search_succeeded, minibatch_curpos, and PLearn::Optimizer::reset().
{
    inherited::reset();
    line_search_failed = false;
    line_search_succeeded = false;
    minibatch_curpos = 0;
}
void PLearn::ConjGradientOptimizer::updateSearchDirection ( real gamma ) [protected]
Update the search direction as search_direction = delta + gamma * search_direction, where 'delta' is supposed to be the opposite gradient at the point we have reached during the line search; 'current_opp_gradient' is also updated in this function (set equal to 'delta').
Definition at line 569 of file ConjGradientOptimizer.cc.
References current_opp_gradient, delta, PLearn::dot(), PLearn::fast_exact_is_equal(), fun_deriv1, fun_deriv2, i, PLearn::TVec< T >::length(), PLearn::min(), PLearn::pownorm(), search_direction, slope_ratio, and step1.
Referenced by findDirection().
{
    if (fast_exact_is_equal(gamma, 0))
        search_direction << delta;
    else
        for (int i = 0; i < search_direction.length(); i++)
            search_direction[i] = delta[i] + gamma * search_direction[i];

    // Update 'current_opp_gradient' for the new current point.
    current_opp_gradient << delta;
    fun_deriv2 = - dot(current_opp_gradient, search_direction);
    if (fun_deriv2 > 0) {
        search_direction << current_opp_gradient;
        fun_deriv2 = - pownorm(search_direction);
    }
    step1 = step1 * min(slope_ratio, fun_deriv1 / (fun_deriv2 - REAL_EPSILON));
    fun_deriv1 = fun_deriv2;
}
Reimplemented from PLearn::Optimizer.
Definition at line 110 of file ConjGradientOptimizer.h.
real PLearn::ConjGradientOptimizer::bracket_limit [protected]
Bracket limit.
Definition at line 77 of file ConjGradientOptimizer.h.
Referenced by minimizeLineSearch().
real PLearn::ConjGradientOptimizer::constrain_limit
Definition at line 59 of file ConjGradientOptimizer.h.
Referenced by declareOptions(), and minimizeLineSearch().
real PLearn::ConjGradientOptimizer::cubic_a [protected]
Cubic interpolation coefficients.
Definition at line 80 of file ConjGradientOptimizer.h.
Referenced by minimizeLineSearch().
real PLearn::ConjGradientOptimizer::cubic_b [protected]
Definition at line 80 of file ConjGradientOptimizer.h.
Referenced by minimizeLineSearch().
real PLearn::ConjGradientOptimizer::current_cost [protected]
Current cost (=function) value.
Definition at line 83 of file ConjGradientOptimizer.h.
Referenced by computeCostAndDerivative(), computeCostValue(), and optimizeN().
Vec PLearn::ConjGradientOptimizer::current_opp_gradient [protected]
Current opposite gradient value.
Definition at line 101 of file ConjGradientOptimizer.h.
Referenced by build_(), computeCostAndDerivative(), computeDerivative(), makeDeepCopyFromShallowCopy(), minimizeLineSearch(), optimizeN(), polakRibiere(), and updateSearchDirection().
Vec PLearn::ConjGradientOptimizer::delta [protected]
Temporary storage of the gradient.
Definition at line 104 of file ConjGradientOptimizer.h.
Referenced by build_(), computeCostAndDerivative(), computeDerivative(), makeDeepCopyFromShallowCopy(), optimizeN(), polakRibiere(), and updateSearchDirection().
real PLearn::ConjGradientOptimizer::expected_red
Definition at line 60 of file ConjGradientOptimizer.h.
Referenced by declareOptions(), and optimizeN().
real PLearn::ConjGradientOptimizer::fun_deriv1 [protected]
Definition at line 86 of file ConjGradientOptimizer.h.
Referenced by minimizeLineSearch(), optimizeN(), and updateSearchDirection().
real PLearn::ConjGradientOptimizer::fun_deriv2 [protected]
Function derivative (w.r.t. the step along the search direction).
Definition at line 86 of file ConjGradientOptimizer.h.
Referenced by minimizeLineSearch(), and updateSearchDirection().
int PLearn::ConjGradientOptimizer::fun_eval_count [protected]
Counter to make sure the number of function evaluations does not exceed the 'max_eval_per_line_search' option.
Definition at line 96 of file ConjGradientOptimizer.h.
Referenced by minimizeLineSearch().
real PLearn::ConjGradientOptimizer::fun_val1 [protected]
Function values.
Definition at line 89 of file ConjGradientOptimizer.h.
Referenced by minimizeLineSearch(), and optimizeN().
real PLearn::ConjGradientOptimizer::fun_val2 [protected]
Definition at line 89 of file ConjGradientOptimizer.h.
Referenced by minimizeLineSearch().
bool PLearn::ConjGradientOptimizer::line_search_failed [protected]
Booleans indicating the line search outcome.
Definition at line 99 of file ConjGradientOptimizer.h.
Referenced by minimizeLineSearch(), and reset().
bool PLearn::ConjGradientOptimizer::line_search_succeeded [protected]
Definition at line 99 of file ConjGradientOptimizer.h.
Referenced by minimizeLineSearch(), and reset().
int PLearn::ConjGradientOptimizer::max_eval_per_line_search
Definition at line 65 of file ConjGradientOptimizer.h.
Referenced by declareOptions(), and minimizeLineSearch().
real PLearn::ConjGradientOptimizer::max_extrapolate
Definition at line 61 of file ConjGradientOptimizer.h.
Referenced by declareOptions(), and minimizeLineSearch().
int PLearn::ConjGradientOptimizer::minibatch_curpos [protected]
Definition at line 74 of file ConjGradientOptimizer.h.
Referenced by optimizeN(), and reset().
int PLearn::ConjGradientOptimizer::minibatch_n_line_searches
Definition at line 70 of file ConjGradientOptimizer.h.
Referenced by declareOptions(), and optimizeN().
int PLearn::ConjGradientOptimizer::minibatch_n_samples
Definition at line 69 of file ConjGradientOptimizer.h.
Referenced by declareOptions(), and optimizeN().
bool PLearn::ConjGradientOptimizer::no_negative_gamma
Definition at line 66 of file ConjGradientOptimizer.h.
Referenced by declareOptions(), and findDirection().
real PLearn::ConjGradientOptimizer::rho
Definition at line 62 of file ConjGradientOptimizer.h.
Referenced by declareOptions(), and minimizeLineSearch().
Vec PLearn::ConjGradientOptimizer::search_direction [protected]
Current search direction for line search.
Definition at line 102 of file ConjGradientOptimizer.h.
Referenced by build_(), computeCostAndDerivative(), computeCostValue(), computeDerivative(), lineSearch(), makeDeepCopyFromShallowCopy(), optimizeN(), and updateSearchDirection().
real PLearn::ConjGradientOptimizer::sigma
Definition at line 63 of file ConjGradientOptimizer.h.
Referenced by declareOptions(), and minimizeLineSearch().
real PLearn::ConjGradientOptimizer::slope_ratio
Definition at line 64 of file ConjGradientOptimizer.h.
Referenced by declareOptions(), and updateSearchDirection().
real PLearn::ConjGradientOptimizer::step1 [protected]
Step values along the search direction, during line search.
Definition at line 92 of file ConjGradientOptimizer.h.
Referenced by minimizeLineSearch(), optimizeN(), and updateSearchDirection().
real PLearn::ConjGradientOptimizer::step2 [protected]
Definition at line 92 of file ConjGradientOptimizer.h.
Referenced by minimizeLineSearch().
Vec PLearn::ConjGradientOptimizer::tmp_storage [protected]
Temporary data storage.
Definition at line 103 of file ConjGradientOptimizer.h.
Referenced by build_(), computeCostAndDerivative(), computeCostValue(), computeDerivative(), makeDeepCopyFromShallowCopy(), and polakRibiere().
int PLearn::ConjGradientOptimizer::verbosity
Definition at line 67 of file ConjGradientOptimizer.h.
Referenced by declareOptions(), findDirection(), lineSearch(), and optimizeN().