PLearn 0.1
PLearn::AdaptGradientOptimizer Class Reference

#include <AdaptGradientOptimizer.h>

Inheritance diagram for PLearn::AdaptGradientOptimizer: (diagram omitted)
Collaboration diagram for PLearn::AdaptGradientOptimizer: (diagram omitted)


Public Member Functions

 AdaptGradientOptimizer ()
virtual string classname () const
virtual OptionList & getOptionList () const
virtual OptionMap & getOptionMap () const
virtual RemoteMethodMap & getRemoteMethodMap () const
virtual AdaptGradientOptimizer * deepCopy (CopiesMap &copies) const
virtual void makeDeepCopyFromShallowCopy (CopiesMap &copies)
 Does the necessary operations to transform a shallow copy (this) into a deep copy by deep-copying all the members that need to be.
virtual void build ()
 Post-constructor.
virtual real optimize ()
virtual bool optimizeN (VecStatsCollector &stats_coll)
 Main optimization method, to be defined in subclasses.

Static Public Member Functions

static string _classname_ ()
static OptionList & _getOptionList_ ()
static RemoteMethodMap & _getRemoteMethodMap_ ()
static Object * _new_instance_for_typemap_ ()
static bool _isa_ (const Object *o)
static void _static_initialize_ ()
static const PPath & declaringFile ()

Public Attributes

int adapt_every
 after how many updates we adapt learning rate
real adapt_coeff1
 a coefficient for learning rate adaptation
real adapt_coeff2
 a coefficient for learning rate adaptation
real decrease_constant
real learning_rate
 gradient descent specific parameters (directly modifiable by the user)
int learning_rate_adaptation
 Learning rate adaptation kind: 0: none, 1: basic, 2: ALAP1, 3: variance.
real max_learning_rate
 max value for learning_rate when adapting
real min_learning_rate
 min value for learning_rate when adapting
int mini_batch
real start_learning_rate
 initial learning rate

Static Public Attributes

static StaticInitializer _static_initializer_

Static Protected Member Functions

static void declareOptions (OptionList &ol)
 Declare options (data fields) for the class.

Private Types

typedef Optimizer inherited

Private Member Functions

void build_ ()
 Object-specific post-constructor.
void adaptLearningRateBasic (Vec old_params, Vec new_evol)
void adaptLearningRateALAP1 (Vec old_gradient, Vec new_gradient)
void adaptLearningRateVariance ()

Private Attributes

bool stochastic_hack
Vec learning_rates
Vec gradient
Vec tmp_storage
Vec old_evol
Array< Mat > oldgradientlocations
Vec store_var_grad
Vec store_grad
Vec store_quad_grad
int count_updates

Detailed Description

CLASS ADAPTGRADIENTOPTIMIZER

A (possibly stochastic) gradient optimizer using various learning rate adaptation methods.

Definition at line 64 of file AdaptGradientOptimizer.h.
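
For orientation, a rough C++ sketch of how such an optimizer might be configured (illustrative only; it assumes the inherited PLearn::Optimizer fields params, cost and nstages are publicly settable build options, and that a cost Var over a VarArray of parameters has been built elsewhere):

    AdaptGradientOptimizer opt;
    opt.start_learning_rate = 0.01;
    opt.min_learning_rate = 1e-4;
    opt.max_learning_rate = 0.1;
    opt.learning_rate_adaptation = 1;  // basic sign-agreement adaptation
    opt.adapt_every = 0;               // 0 means adapt once per epoch
    opt.params = my_params;            // VarArray of parameters (assumed built elsewhere)
    opt.cost = my_cost;                // Var holding the cost to minimize (assumed)
    opt.nstages = n_train_samples;     // number of updates per optimizeN() call
    opt.build();

A per-epoch training loop driving optimizeN() is sketched at the end of this page.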


Member Typedef Documentation

typedef Optimizer PLearn::AdaptGradientOptimizer::inherited [private]

Reimplemented from PLearn::Optimizer.

Definition at line 66 of file AdaptGradientOptimizer.h.


Constructor & Destructor Documentation

PLearn::AdaptGradientOptimizer::AdaptGradientOptimizer ( )

Member Function Documentation

string PLearn::AdaptGradientOptimizer::_classname_ ( ) [static]

Reimplemented from PLearn::Optimizer.

Definition at line 150 of file AdaptGradientOptimizer.cc.

OptionList & PLearn::AdaptGradientOptimizer::_getOptionList_ ( ) [static]

Reimplemented from PLearn::Optimizer.

Definition at line 150 of file AdaptGradientOptimizer.cc.

RemoteMethodMap & PLearn::AdaptGradientOptimizer::_getRemoteMethodMap_ ( ) [static]

Reimplemented from PLearn::Optimizer.

Definition at line 150 of file AdaptGradientOptimizer.cc.

bool PLearn::AdaptGradientOptimizer::_isa_ ( const Object * o) [static]

Reimplemented from PLearn::Optimizer.

Definition at line 150 of file AdaptGradientOptimizer.cc.

Object * PLearn::AdaptGradientOptimizer::_new_instance_for_typemap_ ( ) [static]

Reimplemented from PLearn::Object.

Definition at line 150 of file AdaptGradientOptimizer.cc.

void PLearn::AdaptGradientOptimizer::_static_initialize_ ( ) [static]

Reimplemented from PLearn::Optimizer.

Definition at line 150 of file AdaptGradientOptimizer.cc.

void PLearn::AdaptGradientOptimizer::adaptLearningRateALAP1 ( Vec  old_gradient,
Vec  new_gradient 
) [private]

Definition at line 199 of file AdaptGradientOptimizer.cc.

References adapt_coeff1, j, learning_rate, max_learning_rate, min_learning_rate, PLearn::VarArray::nelems(), and PLearn::Optimizer::params.

Referenced by optimizeN().

void AdaptGradientOptimizer::adaptLearningRateALAP1(Vec old_gradient, Vec new_gradient)
{
    int j = 0; // the current index in learning_rates
    real prod = 0;
    for (j = 0; j<params.nelems(); j++) {
        prod += old_gradient[j] * new_gradient[j];
    }
    // The division by j=params.nelems() is a scaling coeff
    learning_rate = learning_rate + adapt_coeff1 * prod / real(j);
    if (learning_rate < min_learning_rate) {
        learning_rate = min_learning_rate;
    } else if (learning_rate > max_learning_rate) {
        learning_rate = max_learning_rate;
    }
}
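
Restated as a formula (a summary of the code above, not part of the original documentation; here $\eta$ denotes learning_rate, $n$ = params.nelems(), and $g^{\text{old}}, g^{\text{new}}$ are the two gradient vectors):

    \eta \leftarrow \mathrm{clip}_{[\text{min\_learning\_rate},\,\text{max\_learning\_rate}]}
        \left( \eta + \frac{\text{adapt\_coeff1}}{n} \sum_{j=1}^{n} g^{\text{old}}_j \, g^{\text{new}}_j \right)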


void PLearn::AdaptGradientOptimizer::adaptLearningRateBasic ( Vec  old_params,
Vec  new_evol 
) [private]

Definition at line 219 of file AdaptGradientOptimizer.cc.

References adapt_coeff1, adapt_coeff2, PLearn::TVec< T >::data(), PLearn::diff(), i, j, learning_rates, max_learning_rate, min_learning_rate, PLearn::Optimizer::params, PLearn::TVec< T >::size(), and u.

Referenced by optimizeN().

void AdaptGradientOptimizer::adaptLearningRateBasic(Vec old_params, Vec new_evol)
{
    Var* array = params->data();
    int j = 0;
    int k;
    real u; // used to store old_evol[j]
    for (int i=0; i<params.size(); i++) {
        k = j;
        for (; j<k+array[i]->nelems(); j++) {
            u = old_evol[j];
            real diff = array[i]->valuedata[j-k] - old_params[j];
            if (diff > 0) {
                // the parameter has increased
                if (u > 0) {
                    old_evol[j]++;
                } else {
                    old_evol[j] = +1;
                }
            } else if (diff < 0) {
                // the parameter has decreased
                if (u < 0) {
                    old_evol[j]--;
                } else {
                    old_evol[j] = -1;
                }
            } else {
                // there has been no change
                old_evol[j] = 0;
            }
            if (u * old_evol[j] > 0) {
                // consecutive updates in the same direction
                learning_rates[j] += learning_rates[j] * adapt_coeff1;
            }
            else if (u * old_evol[j] < 0) {
                // oscillation
                learning_rates[j] -= learning_rates[j] * adapt_coeff2;
            }
     
            if (learning_rates[j] < min_learning_rate) {
                learning_rates[j] = min_learning_rate;
            } else if (learning_rates[j] > max_learning_rate) {
                learning_rates[j] = max_learning_rate;
            }
        }
    }
}
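
Summarizing the per-parameter rule implemented above (a restatement, not part of the original documentation; each rate is clipped to [min_learning_rate, max_learning_rate] afterwards):

    \eta_j \leftarrow
    \begin{cases}
    \eta_j \,(1 + \text{adapt\_coeff1}) & \text{if parameter } j \text{ moved in the same direction in two consecutive periods} \\
    \eta_j \,(1 - \text{adapt\_coeff2}) & \text{if its direction of change oscillated} \\
    \eta_j & \text{if it did not change}
    \end{cases}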


void PLearn::AdaptGradientOptimizer::adaptLearningRateVariance ( ) [private]

Definition at line 270 of file AdaptGradientOptimizer.cc.

References adapt_coeff1, PLearn::TVec< T >::clear(), count_updates, j, learning_rates, max_learning_rate, min_learning_rate, PLearn::VarArray::nelems(), PLearn::Optimizer::params, PLearn::Optimizer::stage, store_grad, store_quad_grad, and store_var_grad.

Referenced by optimizeN().

void AdaptGradientOptimizer::adaptLearningRateVariance()
{
    real moy_var = 0;
    real exp_avg_coeff = 0;
    if (stage > 1) {
        exp_avg_coeff = adapt_coeff1;
    }
    for (int j=0; j<params.nelems(); j++) {
        // Compute variance
        store_var_grad[j] = 
            store_var_grad[j] * exp_avg_coeff +
            (store_quad_grad[j] - store_grad[j]*store_grad[j] / real(count_updates))
            * (1 - exp_avg_coeff);
        moy_var += store_var_grad[j];
    }
    count_updates = 0;
    store_quad_grad.clear();
    store_grad.clear();
    moy_var /= real(params.nelems());
    int nb_low_var = 0, nb_high_var = 0;
    real var_limit = 1.0;
    for (int j=0; j<params.nelems(); j++) {
        if (store_var_grad[j] <= moy_var * var_limit) {
            learning_rates[j] = max_learning_rate;
            nb_low_var++;
        } else {
            learning_rates[j] = min_learning_rate;
            nb_high_var++;
        }
    }
}
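
Equivalently (a restatement of the code above, not part of the original documentation), writing $N$ for count_updates, $c$ for exp_avg_coeff (adapt_coeff1 once stage > 1, and 0 before) and $g_{j,t}$ for the gradient of parameter $j$ at update $t$ since the last adaptation:

    v_j \leftarrow c \, v_j + (1 - c) \left( \sum_{t=1}^{N} g_{j,t}^2 - \frac{1}{N} \Big( \sum_{t=1}^{N} g_{j,t} \Big)^{2} \right)

learning_rates[j] is then set to max_learning_rate when $v_j$ is at or below the average of all $v_k$ (times var_limit = 1.0), and to min_learning_rate otherwise.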


virtual void PLearn::AdaptGradientOptimizer::build ( ) [inline, virtual]

Post-constructor.

The normal implementation should simply call inherited::build(), then this class's build_(). This method should be callable again at later times, after modifying some option fields to change the "architecture" of the object.

Reimplemented from PLearn::Optimizer.

Definition at line 118 of file AdaptGradientOptimizer.h.
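
For illustration, the conventional pattern described above would amount to something like the following sketch (not the verbatim inline definition from AdaptGradientOptimizer.h):

    virtual void build()
    {
        inherited::build();  // build the Optimizer part first
        build_();            // then run this class's own post-constructor
    }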

void PLearn::AdaptGradientOptimizer::build_ ( ) [private]

Object-specific post-constructor.

This method should be redefined in subclasses and do the actual building of the object according to previously set option fields. Constructors can just set option fields, and then call build_. This method is NOT virtual, and will typically be called only from three places: a constructor, the public virtual build() method, and possibly the public virtual read method (which calls its parent's read). build_() can assume that its parent's build_() has already been called.

Reimplemented from PLearn::Optimizer.

Definition at line 155 of file AdaptGradientOptimizer.cc.

References PLearn::TVec< T >::clear(), PLearn::VarArray::clearGradient(), PLearn::Optimizer::computeOppositeGradient(), PLearn::VarArray::copyTo(), PLearn::Optimizer::cost, count_updates, PLearn::Optimizer::early_stop, PLearn::TVec< T >::fill(), gradient, learning_rate, learning_rate_adaptation, learning_rates, n, PLearn::VarArray::nelems(), PLearn::SumOfVariable::nsamples, old_evol, oldgradientlocations, PLearn::Optimizer::params, PLearn::TVec< T >::resize(), PLearn::TVec< T >::size(), start_learning_rate, stochastic_hack, store_grad, store_quad_grad, store_var_grad, and tmp_storage.

void AdaptGradientOptimizer::build_()
{
    early_stop = false;
    count_updates = 0;
    learning_rate = start_learning_rate;
    SumOfVariable* sumofvar = dynamic_cast<SumOfVariable*>((Variable*)cost);
    stochastic_hack = sumofvar!=0 && sumofvar->nsamples==1;
    params.clearGradient();
    int n = params.nelems();
    if (n > 0) {
        store_var_grad.resize(n);
        store_var_grad.clear();
        store_grad.resize(n);
        store_quad_grad.resize(n);
        store_grad.clear();
        store_quad_grad.clear();
        learning_rates.resize(n);
        gradient.resize(n);
        tmp_storage.resize(n);
        old_evol.resize(n);
        oldgradientlocations.resize(params.size());
        learning_rates.fill(start_learning_rate);
        switch (learning_rate_adaptation) {
        case 0:
            break;
        case 1:
            // tmp_storage is used to store the old parameters
            params.copyTo(tmp_storage);
            old_evol.fill(0);
            break;
        case 2:
            // tmp_storage is used to store the initial opposite gradient
            computeOppositeGradient(tmp_storage);
            break;
        case 3:
            break;
        default:
            break;
        }
    }
}


string PLearn::AdaptGradientOptimizer::classname ( ) const [virtual]

Reimplemented from PLearn::Object.

Definition at line 150 of file AdaptGradientOptimizer.cc.

void PLearn::AdaptGradientOptimizer::declareOptions ( OptionList & ol) [static, protected]

Declare options (data fields) for the class.

Redefine this in subclasses: call declareOption(...) for each option, and then call inherited::declareOptions(options). Please call the inherited method AT THE END to get the options listed in a consistent order (from most recently defined to least recently defined).

  void MyDerivedClass::declareOptions(OptionList& ol)
  {
      declareOption(ol, "inputsize", &MyObject::inputsize_,
                    OptionBase::buildoption,
                    "The size of the input; it must be provided");
      declareOption(ol, "weights", &MyObject::weights,
                    OptionBase::learntoption,
                    "The learned model weights");
      inherited::declareOptions(ol);
  }
Parameters:
ol: List of options that is progressively being constructed for the current class.

Reimplemented from PLearn::Optimizer.

Definition at line 104 of file AdaptGradientOptimizer.cc.

References adapt_coeff1, adapt_coeff2, adapt_every, PLearn::OptionBase::buildoption, PLearn::declareOption(), PLearn::Optimizer::declareOptions(), decrease_constant, learning_rate_adaptation, max_learning_rate, min_learning_rate, and start_learning_rate.

{
    declareOption(ol, "start_learning_rate", &AdaptGradientOptimizer::start_learning_rate, OptionBase::buildoption, 
                  "    the initial learning rate\n");

    declareOption(ol, "min_learning_rate", &AdaptGradientOptimizer::min_learning_rate, OptionBase::buildoption, 
                  "    the minimum value for the learning rate, when there is learning rate adaptation\n");

    declareOption(ol, "max_learning_rate", &AdaptGradientOptimizer::max_learning_rate, OptionBase::buildoption, 
                  "    the maximum value for the learning rate, when there is learning rate adaptation\n");

    declareOption(ol, "adapt_coeff1", &AdaptGradientOptimizer::adapt_coeff1, OptionBase::buildoption, 
                  "    a coefficient for learning rate adaptation, use may depend on the kind of adaptation\n");

    declareOption(ol, "adapt_coeff2", &AdaptGradientOptimizer::adapt_coeff2, OptionBase::buildoption, 
                  "    a coefficient for learning rate adaptation, use may depend on the kind of adaptation\n");

    declareOption(ol, "decrease_constant", &AdaptGradientOptimizer::decrease_constant, OptionBase::buildoption, 
                  "    the learning rate decrease constant : each update of the weights is scaled by the\n\
         coefficient 1/(1 + stage * decrease_constant)\n");

    declareOption(ol, "learning_rate_adaptation", &AdaptGradientOptimizer::learning_rate_adaptation, OptionBase::buildoption, 
                  "    the way the learning rates evolve :\n\
          - 0  : no adaptation\n\
          - 1  : basic adaptation :\n\
                   if the gradient of the weight i has the same sign for two consecutive epochs\n\
                     then lr(i) = lr(i) + lr(i) * adapt_coeff1\n\
                     else lr(i) = lr(i) - lr(i) * adapt_coeff2\n\
          - 2  : ALAP1 formula. See code (not really tested)\n\
          - 3  : variance-dependent learning rate :\n\
                   let avg(i) be the exponential average of the variance of the gradient of the weight i\n\
                   over the past epochs, where the coefficient for the exponential average is adapt_coeff1\n\
                   (adapt_coeff1 = 0 means no average)\n\
                   if avg(i) is low (ie < average of all avg(j))\n\
                     then lr(i) = max_learning_rate\n\
                     else lr(i) = min_learning_rate\n");

    declareOption(ol, "adapt_every", &AdaptGradientOptimizer::adapt_every, OptionBase::buildoption, 
                  "    the learning rate adaptation will occur after adapt_every updates of the weights (0 means after each epoch)\n");

    inherited::declareOptions(ol);
}
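
Purely as an illustration of these build options (assuming the usual PLearn object-specification syntax; the values shown are arbitrary, not recommended defaults):

    AdaptGradientOptimizer(
        start_learning_rate = 0.01;
        min_learning_rate = 1e-4;
        max_learning_rate = 0.1;
        adapt_coeff1 = 0.1;
        adapt_coeff2 = 0.2;
        decrease_constant = 0;
        learning_rate_adaptation = 1;
        adapt_every = 0;
    )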


static const PPath& PLearn::AdaptGradientOptimizer::declaringFile ( ) [inline, static]

Reimplemented from PLearn::Optimizer.

Definition at line 115 of file AdaptGradientOptimizer.h.

AdaptGradientOptimizer * PLearn::AdaptGradientOptimizer::deepCopy ( CopiesMap & copies) const [virtual]

Reimplemented from PLearn::Optimizer.

Definition at line 150 of file AdaptGradientOptimizer.cc.

OptionList & PLearn::AdaptGradientOptimizer::getOptionList ( ) const [virtual]

Reimplemented from PLearn::Object.

Definition at line 150 of file AdaptGradientOptimizer.cc.

OptionMap & PLearn::AdaptGradientOptimizer::getOptionMap ( ) const [virtual]

Reimplemented from PLearn::Object.

Definition at line 150 of file AdaptGradientOptimizer.cc.

RemoteMethodMap & PLearn::AdaptGradientOptimizer::getRemoteMethodMap ( ) const [virtual]

Reimplemented from PLearn::Object.

Definition at line 150 of file AdaptGradientOptimizer.cc.

virtual void PLearn::AdaptGradientOptimizer::makeDeepCopyFromShallowCopy ( CopiesMap & copies) [inline, virtual]

Does the necessary operations to transform a shallow copy (this) into a deep copy by deep-copying all the members that need to be.

This needs to be overridden by every class that adds "complex" data members to the class, such as Vec, Mat, PP<Something>, etc. Typical implementation:

  void CLASS_OF_THIS::makeDeepCopyFromShallowCopy(CopiesMap& copies)
  {
      inherited::makeDeepCopyFromShallowCopy(copies);
      deepCopyField(complex_data_member1, copies);
      deepCopyField(complex_data_member2, copies);
      ...
  }
Parameters:
copies: A map used by the deep-copy mechanism to keep track of already-copied objects.

Reimplemented from PLearn::Optimizer.

Definition at line 116 of file AdaptGradientOptimizer.h.
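
Following the pattern above, a sketch of what this class's override might deep-copy (illustrative only; the actual inline definition is at line 116 of AdaptGradientOptimizer.h) is:

    void AdaptGradientOptimizer::makeDeepCopyFromShallowCopy(CopiesMap& copies)
    {
        inherited::makeDeepCopyFromShallowCopy(copies);
        // Vec/Array members of this class that would need deep-copying
        deepCopyField(learning_rates, copies);
        deepCopyField(gradient, copies);
        deepCopyField(tmp_storage, copies);
        deepCopyField(old_evol, copies);
        deepCopyField(oldgradientlocations, copies);
        deepCopyField(store_var_grad, copies);
        deepCopyField(store_grad, copies);
        deepCopyField(store_quad_grad, copies);
    }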

real PLearn::AdaptGradientOptimizer::optimize ( ) [virtual]

Definition at line 304 of file AdaptGradientOptimizer.cc.

References PLERROR.

{
    PLERROR("In AdaptGradientOptimizer::optimize Deprecated, use OptimizeN !");
    return 0;
}
bool PLearn::AdaptGradientOptimizer::optimizeN ( VecStatsCollector & stats_coll) [virtual]

Main optimization method, to be defined in subclasses.

Return true iff no further optimization is possible.

Implements PLearn::Optimizer.

Definition at line 313 of file AdaptGradientOptimizer.cc.

References adapt_every, adaptLearningRateALAP1(), adaptLearningRateBasic(), adaptLearningRateVariance(), PLearn::VarArray::clearGradient(), PLearn::VarArray::copyGradientTo(), PLearn::VarArray::copyTo(), PLearn::Optimizer::cost, count_updates, decrease_constant, PLearn::Optimizer::early_stop, PLearn::endl(), PLearn::VarArray::fbprop(), gradient, i, learning_rate, learning_rate_adaptation, learning_rates, n, PLearn::VarArray::nelems(), PLearn::Optimizer::nstages, old_evol, oldgradientlocations, PLearn::Optimizer::params, PLearn::Optimizer::proppath, PLearn::TVec< T >::size(), PLearn::Optimizer::stage, start_learning_rate, stochastic_hack, store_grad, store_quad_grad, tmp_storage, PLearn::VarArray::update(), PLearn::VecStatsCollector::update(), and PLearn::VarArray::updateAndClear().

bool AdaptGradientOptimizer::optimizeN(VecStatsCollector& stats_coll)
{

    bool adapt = (learning_rate_adaptation != 0);
    stochastic_hack = stochastic_hack && !adapt;
    if (adapt_every == 0) {
        adapt_every = nstages;  // the number of steps to complete an epoch
    }

    // Big hack for the special case of stochastic gradient, to avoid doing an explicit update
    // (temporarily change the gradient fields of the parameters to point to the parameters themselves,
    // so that gradients are "accumulated" directly in the parameters, thus updating them!
    if(stochastic_hack) {
        int n = params.size();
        for(int i=0; i<n; i++)
            oldgradientlocations[i] = params[i]->defineGradientLocation(params[i]->matValue);
    }

    int stage_max = stage + nstages; // the stage to reach

    for (; !early_stop && stage<stage_max; stage++) {

        // Take into account the learning rate decrease
        // This is actually done during the update step, except when there is no
        // learning rate adaptation
        switch (learning_rate_adaptation) {
        case 0:
            learning_rate = start_learning_rate/(1.0+decrease_constant*stage);
            break;
        default:
            break;
        }

        proppath.clearGradient();
        if (adapt)
            cost->gradient[0] = -1.;
        else
            cost->gradient[0] = -learning_rate;

        proppath.fbprop();

        // Actions to take after each step, depending on the
        // adaptation method used :
        // - moving along the chosen direction
        // - adapting the learning rate
        // - storing some data
        real coeff = 1/(1.0 + stage * decrease_constant); // the scaling coefficient
        switch (learning_rate_adaptation) {
        case 0:
            if (!stochastic_hack) {
                params.updateAndClear();
            }
            break;
        case 1:
            params.copyGradientTo(gradient);
            // TODO Not really efficient, write a faster update ?
            params.update(learning_rates, gradient, coeff); 
            params.clearGradient();
            break;
        case 2:
            params.copyGradientTo(gradient);
            adaptLearningRateALAP1(tmp_storage, gradient);
            params.update(learning_rate, gradient);
            tmp_storage << gradient;
            params.clearGradient();
            break;
        case 3:
            // storing sum and sum-of-squares of the gradient in order to compute
            // the variance later
            params.copyGradientTo(gradient);
            for (int i=0; i<params.nelems(); i++) {
                store_grad[i] += gradient[i];
                store_quad_grad[i] += gradient[i] * gradient[i];
            }
            count_updates++;
            params.update(learning_rates, gradient, coeff);
            params.clearGradient();
            break;
        default:
            break;
        }

        if ((stage + 1) % adapt_every == 0) {
            // Time for learning rate adaptation
            switch (learning_rate_adaptation) {
            case 0:
                break;
            case 1:
                adaptLearningRateBasic(tmp_storage, old_evol);
                params.copyTo(tmp_storage);
                break;
            case 2:
                // Nothing, the adaptation is after each example
                break;
            case 3:
                adaptLearningRateVariance();
                break;
            default:
                break;
            }
        }

        stats_coll.update(cost->value);
    }

    if(stochastic_hack) // restore the gradients as they previously were...
    {
        int n = params.size();
        for(int i=0; i<n; i++)
            params[i]->defineGradientLocation(oldgradientlocations[i]);
    }

    if (early_stop)
        cout << "Early Stopping !" << endl;
    return early_stop;
}
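
A minimal sketch of how optimizeN() is typically driven (assuming an optimizer opt configured and built as in the sketch near the top of this page; max_epochs is a hypothetical caller-side variable):

    VecStatsCollector stats_coll;
    bool early_stopped = false;
    for (int epoch = 0; epoch < max_epochs && !early_stopped; epoch++) {
        // one call performs opt.nstages parameter updates (one "epoch" here)
        early_stopped = opt.optimizeN(stats_coll);
    }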



Member Data Documentation

StaticInitializer PLearn::AdaptGradientOptimizer::_static_initializer_ [static]

Reimplemented from PLearn::Optimizer.

Definition at line 115 of file AdaptGradientOptimizer.h.

real PLearn::AdaptGradientOptimizer::adapt_coeff1

a coefficient for learning rate adaptation

Definition at line 73 of file AdaptGradientOptimizer.h.

Referenced by adaptLearningRateALAP1(), adaptLearningRateBasic(), adaptLearningRateVariance(), and declareOptions().

real PLearn::AdaptGradientOptimizer::adapt_coeff2

a coefficient for learning rate adaptation

Definition at line 74 of file AdaptGradientOptimizer.h.

Referenced by adaptLearningRateBasic(), and declareOptions().

int PLearn::AdaptGradientOptimizer::adapt_every

after how many updates we adapt learning rate

Definition at line 71 of file AdaptGradientOptimizer.h.

Referenced by declareOptions(), and optimizeN().

int PLearn::AdaptGradientOptimizer::count_updates [private]

Definition at line 109 of file AdaptGradientOptimizer.h.

Referenced by adaptLearningRateVariance(), build_(), and optimizeN().

real PLearn::AdaptGradientOptimizer::decrease_constant

Definition at line 76 of file AdaptGradientOptimizer.h.

Referenced by declareOptions(), and optimizeN().

Vec PLearn::AdaptGradientOptimizer::gradient [private]

Definition at line 100 of file AdaptGradientOptimizer.h.

Referenced by build_(), and optimizeN().

real PLearn::AdaptGradientOptimizer::learning_rate

gradient descent specific parameters (directly modifiable by the user)

Definition at line 80 of file AdaptGradientOptimizer.h.

Referenced by adaptLearningRateALAP1(), build_(), and optimizeN().

int PLearn::AdaptGradientOptimizer::learning_rate_adaptation

Learning rate adaptation kind: 0: none, 1: basic, 2: ALAP1, 3: variance.

Definition at line 87 of file AdaptGradientOptimizer.h.

Referenced by build_(), declareOptions(), and optimizeN().

real PLearn::AdaptGradientOptimizer::max_learning_rate

max value for learning_rate when adapting

Definition at line 89 of file AdaptGradientOptimizer.h.

Referenced by adaptLearningRateALAP1(), adaptLearningRateBasic(), adaptLearningRateVariance(), and declareOptions().

real PLearn::AdaptGradientOptimizer::min_learning_rate

min value for learning_rate when adapting

Definition at line 90 of file AdaptGradientOptimizer.h.

Referenced by adaptLearningRateALAP1(), adaptLearningRateBasic(), adaptLearningRateVariance(), and declareOptions().

int PLearn::AdaptGradientOptimizer::mini_batch

Definition at line 92 of file AdaptGradientOptimizer.h.

Vec PLearn::AdaptGradientOptimizer::old_evol [private]

Definition at line 104 of file AdaptGradientOptimizer.h.

Referenced by build_(), and optimizeN().

Array< Mat > PLearn::AdaptGradientOptimizer::oldgradientlocations [private]

Definition at line 105 of file AdaptGradientOptimizer.h.

Referenced by build_(), and optimizeN().

real PLearn::AdaptGradientOptimizer::start_learning_rate

initial learning rate

Definition at line 94 of file AdaptGradientOptimizer.h.

Referenced by build_(), declareOptions(), and optimizeN().

bool PLearn::AdaptGradientOptimizer::stochastic_hack [private]

Definition at line 98 of file AdaptGradientOptimizer.h.

Referenced by build_(), and optimizeN().

Vec PLearn::AdaptGradientOptimizer::store_grad [private]

Definition at line 107 of file AdaptGradientOptimizer.h.

Referenced by adaptLearningRateVariance(), build_(), and optimizeN().

Vec PLearn::AdaptGradientOptimizer::store_quad_grad [private]

Definition at line 108 of file AdaptGradientOptimizer.h.

Referenced by adaptLearningRateVariance(), build_(), and optimizeN().

Vec PLearn::AdaptGradientOptimizer::store_var_grad [private]

Definition at line 106 of file AdaptGradientOptimizer.h.

Referenced by adaptLearningRateVariance(), and build_().

Vec PLearn::AdaptGradientOptimizer::tmp_storage [private]

Definition at line 101 of file AdaptGradientOptimizer.h.

Referenced by build_(), and optimizeN().


The documentation for this class was generated from the following files:
 AdaptGradientOptimizer.h
 AdaptGradientOptimizer.cc