PLearn 0.1
PLearn::GradNNetLayerModule Class Reference

Affine transformation module, with stochastic gradient descent updates.

#include <GradNNetLayerModule.h>

Inheritance diagram for PLearn::GradNNetLayerModule (inherits from PLearn::OnlineLearningModule).
Collaboration diagram for PLearn::GradNNetLayerModule.


Public Member Functions

 GradNNetLayerModule ()
 Default constructor.
virtual void fprop (const Vec &input, Vec &output) const
 Given the input, compute the output (resizing it appropriately if needed). Soon to be deprecated; use fprop(const TVec<Mat*>& ports_value).
virtual void fprop (const Mat &inputs, Mat &outputs)
 Overridden.
virtual void bpropUpdate (const Vec &input, const Vec &output, const Vec &output_gradient)
 Soon to be deprecated; use bpropAccUpdate(const TVec<Mat*>& ports_value, const TVec<Mat*>& ports_gradient). Adapt based on the output gradient: this method should only be called right after a corresponding fprop, with the same first two arguments (and output must not have been modified since then).
virtual void bpropUpdate (const Vec &input, const Vec &output, Vec &input_gradient, const Vec &output_gradient, bool accumulate=false)
 Soon to be deprecated; use bpropAccUpdate(const TVec<Mat*>& ports_value, const TVec<Mat*>& ports_gradient). This version also computes the input gradient.
virtual void bpropUpdate (const Mat &inputs, const Mat &outputs, Mat &input_gradients, const Mat &output_gradients, bool accumulate=false)
 Soon to be deprecated; use bpropAccUpdate(const TVec<Mat*>& ports_value, const TVec<Mat*>& ports_gradient).
virtual void bbpropUpdate (const Vec &input, const Vec &output, const Vec &output_gradient, const Vec &output_diag_hessian)
 Similar to bpropUpdate, but also adapts based on an estimate of the diagonal of the Hessian matrix, and propagates it back.
virtual void forget ()
 Reset the parameters to the state they would have before starting training.
virtual void setLearningRate (real dynamic_learning_rate)
virtual string classname () const
virtual OptionList & getOptionList () const
virtual OptionMap & getOptionMap () const
virtual RemoteMethodMap & getRemoteMethodMap () const
virtual GradNNetLayerModule * deepCopy (CopiesMap &copies) const
virtual void build ()
 Post-constructor.
virtual void makeDeepCopyFromShallowCopy (CopiesMap &copies)
 Transforms a shallow copy into a deep copy.

Static Public Member Functions

static string _classname_ ()
static OptionList & _getOptionList_ ()
static RemoteMethodMap & _getRemoteMethodMap_ ()
static Object * _new_instance_for_typemap_ ()
static bool _isa_ (const Object *o)
static void _static_initialize_ ()
static const PPath & declaringFile ()

Public Attributes

real start_learning_rate
 Starting learning-rate, by which we multiply the gradient step.
real decrease_constant
 learning_rate = start_learning_rate / (1 + decrease_constant*t), where t is the number of updates since the beginning
Mat init_weights
 Optional initial weights of the neurons (one row per neuron).
Vec init_bias
 Optional initial bias of the neurons.
real init_weights_random_scale
 If init_weights is not provided, the weights are initialized randomly from a uniform in [-r,r], with r = init_weights_random_scale/input_size.
real L1_penalty_factor
 Optional (default=0) factor of L1 regularization term.
real L2_penalty_factor
 Optional (default=0) factor of L2 regularization term.
Mat weights
 The weights, one neuron per line.
Vec bias
 The bias.

Static Public Attributes

static StaticInitializer _static_initializer_

Static Protected Member Functions

static void declareOptions (OptionList &ol)
 Declares the class options.

Protected Attributes

Vec ones
 A vector filled with all ones.

Private Types

typedef OnlineLearningModule inherited

Private Member Functions

void build_ ()
 This does the actual building.
void resizeOnes (int n)
 Resize vector 'ones'.

Private Attributes

real learning_rate
int step_number

Detailed Description

Affine transformation module, with stochastic gradient descent updates.

Neural network layer using stochastic gradient descent to update the neuron weights: Output = weights * Input + bias. Weights and bias are updated by online gradient descent, with a learning rate that possibly decreases as 1/(1 + n_updates_done * decrease_constant). L1 and L2 regularization penalties can be added to push weights toward 0. Weights can be initialized to 0, to a given initial matrix, or randomly from a uniform distribution.

Definition at line 65 of file GradNNetLayerModule.h.
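
The sketch below shows one way such a module might be set up and used; the option values, the PRandom seed, and the exact include path are illustrative assumptions, not part of this class's documentation.

// Minimal usage sketch (assumptions noted above).
#include <GradNNetLayerModule.h>

using namespace PLearn;

void layer_example()
{
    PP<GradNNetLayerModule> layer = new GradNNetLayerModule();
    layer->input_size  = 4;                 // options inherited from OnlineLearningModule
    layer->output_size = 2;
    layer->start_learning_rate = 0.01;
    layer->decrease_constant   = 1e-6;
    layer->random_gen = new PRandom(1827);  // needed for random weight initialization
    layer->build();                         // build_() calls forget(), which sizes and initializes weights and bias

    Vec input(4), output, output_gradient(2);
    input.fill(0.5);
    layer->fprop(input, output);            // output = weights * input + bias

    output_gradient.fill(1.0);              // dC/d(output), supplied by the layer above
    layer->bpropUpdate(input, output, output_gradient);  // one online SGD step
}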


Member Typedef Documentation

typedef OnlineLearningModule PLearn::GradNNetLayerModule::inherited [private]

Reimplemented from PLearn::OnlineLearningModule.

Definition at line 67 of file GradNNetLayerModule.h.


Constructor & Destructor Documentation

PLearn::GradNNetLayerModule::GradNNetLayerModule ( )

Default constructor.

Definition at line 65 of file GradNNetLayerModule.cc.


Member Function Documentation

string PLearn::GradNNetLayerModule::_classname_ ( ) [static]

Reimplemented from PLearn::OnlineLearningModule.

Definition at line 60 of file GradNNetLayerModule.cc.

OptionList & PLearn::GradNNetLayerModule::_getOptionList_ ( ) [static]

Reimplemented from PLearn::OnlineLearningModule.

Definition at line 60 of file GradNNetLayerModule.cc.

RemoteMethodMap & PLearn::GradNNetLayerModule::_getRemoteMethodMap_ ( ) [static]

Reimplemented from PLearn::OnlineLearningModule.

Definition at line 60 of file GradNNetLayerModule.cc.

bool PLearn::GradNNetLayerModule::_isa_ ( const Object * o) [static]

Reimplemented from PLearn::OnlineLearningModule.

Definition at line 60 of file GradNNetLayerModule.cc.

Object * PLearn::GradNNetLayerModule::_new_instance_for_typemap_ ( ) [static]

Reimplemented from PLearn::Object.

Definition at line 60 of file GradNNetLayerModule.cc.

void PLearn::GradNNetLayerModule::_static_initialize_ ( ) [static]

Reimplemented from PLearn::OnlineLearningModule.

Definition at line 60 of file GradNNetLayerModule.cc.

void PLearn::GradNNetLayerModule::bbpropUpdate ( const Vec & input,
const Vec & output,
const Vec & output_gradient,
const Vec & output_diag_hessian 
) [virtual]

Similar to bpropUpdate, but also adapts based on an estimate of the diagonal of the Hessian matrix, and propagates it back.

If this method is defined, you can use it instead of bpropUpdate(...). The implementation provided here simply calls bpropUpdate(input, output, output_gradient) and ignores the input Hessian and input gradient.

Reimplemented from PLearn::OnlineLearningModule.

Definition at line 290 of file GradNNetLayerModule.cc.

References bpropUpdate(), PLearn::OnlineLearningModule::output_size, PLASSERT_MSG, and PLearn::TVec< T >::size().

{
    PLASSERT_MSG( output_diag_hessian.size() == output_size,
                  "output_diag_hessian.size() should be equal to"
                  " this->output_size" );
    bpropUpdate( input, output, output_gradient );
}


void PLearn::GradNNetLayerModule::bpropUpdate ( const Vec & input,
const Vec & output,
Vec & input_gradient,
const Vec & output_gradient,
bool  accumulate = false 
) [virtual]

Soon to be deprecated; use bpropAccUpdate(const TVec<Mat*>& ports_value, const TVec<Mat*>& ports_gradient). This version also computes the input gradient.

N.B. the default implementation just raises a PLERROR. The 'accumulate' flag indicates whether the computed derivatives are accumulated into input_gradient or overwrite it.

Reimplemented from PLearn::OnlineLearningModule.

Definition at line 157 of file GradNNetLayerModule.cc.

References bias, PLearn::TVec< T >::clear(), decrease_constant, i, PLearn::OnlineLearningModule::input_size, j, L1_penalty_factor, L2_penalty_factor, learning_rate, PLearn::OnlineLearningModule::output_size, PLASSERT_MSG, PLWARNING, PLearn::TVec< T >::resize(), PLearn::TVec< T >::size(), start_learning_rate, step_number, and weights.

{
    PLASSERT_MSG( input.size() == input_size,
                  "input.size() should be equal to this->input_size" );
    PLASSERT_MSG( output.size() == output_size,
                  "output.size() should be equal to this->output_size" );
    PLASSERT_MSG( output_gradient.size() == output_size,
                  "output_gradient.size() should be equal to this->output_size"
                );

    if( accumulate )
    {
        PLASSERT_MSG( input_gradient.size() == input_size,
                      "Cannot resize input_gradient AND accumulate into it" );
    }
    else
    {
        input_gradient.resize( input_size );
        input_gradient.clear();
    }

    learning_rate = start_learning_rate / (1+decrease_constant*step_number);

    for( int i=0; i<output_size; i++ )
    {
        real og_i = output_gradient[i];
        real* w_ = weights[i];

        real delta_L1 = learning_rate * L1_penalty_factor;
        real delta_L2 = learning_rate * L2_penalty_factor;
        if( delta_L2 > 1 )
            PLWARNING("GradNNetLayerModule::bpropUpdate:\n"
                      "learning rate = %f is too large!\n", learning_rate);

        real lr_og_i = learning_rate * og_i;
        bias[i] -= lr_og_i;

        for( int j=0; j<input_size; j++ )
        {
            input_gradient[j] += w_[j] * og_i;

            if( delta_L2 > 0. )
                w_[j] *= (1 - delta_L2);

            w_[j] -= input[j] * lr_og_i;

            if( delta_L1 > 0. )
            {
                if( w_[j] > delta_L1 )
                    w_[j] -= delta_L1;
                else if( w_[j] < -delta_L1 )
                    w_[j] += delta_L1;
                else
                    w_[j] = 0.;
            }

        }
    }
    step_number++;
}
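
Restated as equations (a summary of the code above, not additional documentation), one call performs, with \eta = start_learning_rate / (1 + decrease_constant * step_number), g = output_gradient, x = input, \lambda_1 = L1_penalty_factor and \lambda_2 = L2_penalty_factor:

\begin{align*}
\frac{\partial C}{\partial x_j} &\mathrel{+}= \sum_i w_{ij}\, g_i &&\text{(input gradient, using the pre-update weights)}\\
b_i &\leftarrow b_i - \eta\, g_i\\
w_{ij} &\leftarrow (1 - \eta\lambda_2)\, w_{ij} - \eta\, g_i\, x_j\\
w_{ij} &\leftarrow \mathrm{sign}(w_{ij})\,\max\!\left(|w_{ij}| - \eta\lambda_1,\ 0\right)
\end{align*}

followed by incrementing step_number.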


void PLearn::GradNNetLayerModule::bpropUpdate ( const Mat & inputs,
const Mat & outputs,
Mat & input_gradients,
const Mat & output_gradients,
bool  accumulate = false 
) [virtual]

Soon to be deprecated; use bpropAccUpdate(const TVec<Mat*>& ports_value, const TVec<Mat*>& ports_gradient).

Reimplemented from PLearn::OnlineLearningModule.

Definition at line 221 of file GradNNetLayerModule.cc.

References bias, decrease_constant, PLearn::TMat< T >::fill(), i, PLearn::OnlineLearningModule::input_size, j, L1_penalty_factor, L2_penalty_factor, learning_rate, PLearn::TMat< T >::length(), n, ones, PLearn::OnlineLearningModule::output_size, PLASSERT, PLASSERT_MSG, PLearn::productAcc(), PLearn::TMat< T >::resize(), resizeOnes(), start_learning_rate, step_number, PLearn::transposeProductScaleAcc(), weights, and PLearn::TMat< T >::width().

{
    PLASSERT( inputs.width() == input_size );
    PLASSERT( outputs.width() == output_size );
    PLASSERT( output_gradients.width() == output_size );

    int n = inputs.length();

    if( accumulate )
    {
        PLASSERT_MSG( input_gradients.width() == input_size &&
                input_gradients.length() == n,
                "Cannot resize input_gradients and accumulate into it" );
    }
    else
    {
        input_gradients.resize(n, input_size);
        input_gradients.fill(0);
    }

    learning_rate = start_learning_rate / (1+decrease_constant*step_number);
    real avg_lr = learning_rate / n; // To obtain an average on a mini-batch.

    // With L2 regularization, weights are scaled by a coefficient equal to
    // 1 - learning rate * penalty.
    real l2_scaling =
        L2_penalty_factor > 0 ? 1 - learning_rate * L2_penalty_factor
                              : 1;
    PLASSERT_MSG(l2_scaling > 0, "Learning rate too large");

    // Compute input gradient.
    productAcc(input_gradients, output_gradients, weights);

    // Update bias.
    resizeOnes(n);
    transposeProductScaleAcc(bias, output_gradients, ones, -avg_lr, real(1));

    // Update weights.
    transposeProductScaleAcc(weights, output_gradients, inputs,
                             -avg_lr, l2_scaling);

    // Apply L1 penalty if needed (note: this is not very efficient).
    if (L1_penalty_factor > 0) {
        real delta_L1 = learning_rate * L1_penalty_factor;
        for( int i=0; i<output_size; i++ )
        {
            real* w_ = weights[i];
            for( int j=0; j<input_size; j++ )
            {
                real& w_ij = w_[j];
                if( w_ij > delta_L1 )
                    w_ij -= delta_L1;
                else if( w_ij < -delta_L1 )
                    w_ij += delta_L1;
                else
                    w_ij = 0.;
            }
        }
    }
    step_number += n;
}
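
In matrix form (again a summary of the code above), with X = inputs (n x input_size), G = output_gradients (n x output_size), and the same \eta as in the single-example version:

\begin{align*}
\nabla X &\mathrel{+}= G\, W\\
b &\leftarrow b - \tfrac{\eta}{n}\, G^{\top}\mathbf{1}\\
W &\leftarrow (1 - \eta\lambda_2)\, W - \tfrac{\eta}{n}\, G^{\top} X\\
w_{ij} &\leftarrow \mathrm{sign}(w_{ij})\,\max\!\left(|w_{ij}| - \eta\lambda_1,\ 0\right)
\end{align*}

Note that the gradient term is averaged over the mini-batch (\eta/n) while the L1 and L2 shrinkage use the full \eta, and step_number is advanced by n.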


void PLearn::GradNNetLayerModule::bpropUpdate ( const Vec & input,
const Vec & output,
const Vec & output_gradient 
) [virtual]

Soon to be deprecated; use bpropAccUpdate(const TVec<Mat*>& ports_value, const TVec<Mat*>& ports_gradient). Adapt based on the output gradient: this method should only be called right after a corresponding fprop, with the same first two arguments (and output must not have been modified since then).

Since sub-classes are supposed to learn online, the object is 'ready to be used' just after any bpropUpdate. N.B. the default implementation just calls bpropUpdate(input, output, input_gradient, output_gradient) and ignores the input gradient.

Reimplemented from PLearn::OnlineLearningModule.

Definition at line 106 of file GradNNetLayerModule.cc.

References bias, decrease_constant, i, PLearn::OnlineLearningModule::input_size, j, L1_penalty_factor, L2_penalty_factor, learning_rate, PLearn::OnlineLearningModule::output_size, PLASSERT_MSG, PLWARNING, PLearn::TVec< T >::size(), start_learning_rate, step_number, and weights.

Referenced by bbpropUpdate().

{
    PLASSERT_MSG( input.size() == input_size,
                  "input.size() should be equal to this->input_size" );
    PLASSERT_MSG( output.size() == output_size,
                  "output.size() should be equal to this->output_size" );
    PLASSERT_MSG( output_gradient.size() == output_size,
                  "output_gradient.size() should be equal to this->output_size"
                );

    learning_rate = start_learning_rate / (1+decrease_constant*step_number);

    for( int i=0; i<output_size; i++ )
    {
        real og_i = output_gradient[i];
        real* w_ = weights[i];

        real delta_L1 = learning_rate * L1_penalty_factor;
        real delta_L2 = learning_rate * L2_penalty_factor;
        if( delta_L2 > 1 )
            PLWARNING("GradNNetLayerModule::bpropUpdate:\n"
                      "learning rate = %f is too large!\n", learning_rate);

        real lr_og_i = learning_rate * og_i;
        bias[i] -= lr_og_i;

        for( int j=0; j<input_size; j++ )
        {
            if( delta_L2 > 0. )
                w_[j] *= (1 - delta_L2);

            w_[j] -= input[j] * lr_og_i;

            if( delta_L1 > 0. )
            {
                if( w_[j] > delta_L1 )
                    w_[j] -= delta_L1;
                else if( w_[j] < -delta_L1 )
                    w_[j] += delta_L1;
                else
                    w_[j] = 0.;
            }

        }
    }
    step_number++;
}


void PLearn::GradNNetLayerModule::build ( ) [virtual]

Post-constructor.

The normal implementation simply calls inherited::build() and then this class's build_(). This method should be callable again at later times, after modifying some option fields to change the "architecture" of the object.

Reimplemented from PLearn::OnlineLearningModule.

Definition at line 368 of file GradNNetLayerModule.cc.

References PLearn::OnlineLearningModule::build(), and build_().

Referenced by PLearn::TopDownAsymetricDeepNetwork::build_output_layer_and_cost(), PLearn::StackedFocusedAutoassociatorsNet::build_output_layer_and_cost(), and PLearn::DiscriminativeDeepBeliefNet::build_output_layer_and_cost().
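
The body of build() is not reproduced on this page; per the description and the references above, it follows the standard two-step PLearn pattern (sketch):

{
    inherited::build();
    build_();
}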


void PLearn::GradNNetLayerModule::build_ ( ) [private]

This does the actual building.

Reimplemented from PLearn::OnlineLearningModule.

Definition at line 447 of file GradNNetLayerModule.cc.

References bias, forget(), PLearn::OnlineLearningModule::input_size, PLearn::TMat< T >::length(), PLearn::OnlineLearningModule::output_size, PLERROR, PLearn::TVec< T >::size(), weights, and PLearn::TMat< T >::width().

Referenced by build().

{
    if( input_size < 0 ) // has not been initialized
        return;

    if( output_size < 0 )
        PLERROR("GradNNetLayerModule::build_: 'output_size' is < 0 (%i),\n"
                " you should set it to a positive integer (the number of"
                " neurons).\n", output_size);

    if( weights.length() != output_size
        || weights.width() != input_size
        || bias.size() != output_size )
    {
        forget();
    }
}


string PLearn::GradNNetLayerModule::classname ( ) const [virtual]

Reimplemented from PLearn::Object.

Definition at line 60 of file GradNNetLayerModule.cc.

void PLearn::GradNNetLayerModule::declareOptions ( OptionList & ol) [static, protected]

Declares the class options.

Reimplemented from PLearn::OnlineLearningModule.

Definition at line 391 of file GradNNetLayerModule.cc.

References bias, PLearn::OptionBase::buildoption, PLearn::declareOption(), PLearn::OnlineLearningModule::declareOptions(), decrease_constant, init_bias, init_weights, init_weights_random_scale, L1_penalty_factor, L2_penalty_factor, PLearn::OptionBase::learntoption, start_learning_rate, and weights.

{
    declareOption(ol, "start_learning_rate",
                  &GradNNetLayerModule::start_learning_rate,
                  OptionBase::buildoption,
                  "Learning-rate of stochastic gradient optimization");

    declareOption(ol, "decrease_constant",
                  &GradNNetLayerModule::decrease_constant,
                  OptionBase::buildoption,
                  "Decrease constant of stochastic gradient optimization");

    declareOption(ol, "init_weights", &GradNNetLayerModule::init_weights,
                  OptionBase::buildoption,
                  "Optional initial weights of the neurons (one row per neuron).\n"
                  "If not provided then weights are initialized according to a uniform\n"
                  "distribution (see init_weights_random_scale)\n");

    declareOption(ol, "init_bias", &GradNNetLayerModule::init_bias,
                  OptionBase::buildoption,
                  "Optional initial bias of the neurons. If not provided, they are set to 0.\n");

    declareOption(ol, "init_weights_random_scale",
                  &GradNNetLayerModule::init_weights_random_scale,
                  OptionBase::buildoption,
                  "If init_weights is not provided, the weights are initialized randomly\n"
                  "from a uniform in [-r,r], with r = init_weights_random_scale/input_size.\n"
                  "To clear the weights initially, just set this option to 0.");

    declareOption(ol, "L1_penalty_factor",
                  &GradNNetLayerModule::L1_penalty_factor,
                  OptionBase::buildoption,
                  "Optional (default=0) factor of L1 regularization term, i.e.\n"
                  "minimize L1_penalty_factor * sum_{ij} |weights(i,j)| during training.\n");

    declareOption(ol, "L2_penalty_factor",
                  &GradNNetLayerModule::L2_penalty_factor,
                  OptionBase::buildoption,
                  "Optional (default=0) factor of L2 regularization term, i.e.\n"
                  "minimize 0.5 * L2_penalty_factor * sum_{ij} weights(i,j)^2 during training.\n");


    declareOption(ol, "weights", &GradNNetLayerModule::weights,
                  OptionBase::learntoption,
                  "Input weights of the neurons (one row per neuron)");

    declareOption(ol, "bias", &GradNNetLayerModule::bias,
                  OptionBase::learntoption,
                  "Bias of the neurons");

    inherited::declareOptions(ol);
}


static const PPath& PLearn::GradNNetLayerModule::declaringFile ( ) [inline, static]

Reimplemented from PLearn::OnlineLearningModule.

Definition at line 146 of file GradNNetLayerModule.h.


GradNNetLayerModule * PLearn::GradNNetLayerModule::deepCopy ( CopiesMap & copies) const [virtual]

Reimplemented from PLearn::OnlineLearningModule.

Definition at line 60 of file GradNNetLayerModule.cc.

void PLearn::GradNNetLayerModule::forget ( ) [virtual]

Reset the parameters to the state they would have before starting training.

Note that this method is necessarily called from build().

Implements PLearn::OnlineLearningModule.

Definition at line 317 of file GradNNetLayerModule.cc.

References bias, PLearn::TMat< T >::clear(), PLearn::TVec< T >::clear(), init_bias, init_weights, init_weights_random_scale, PLearn::OnlineLearningModule::input_size, learning_rate, PLearn::TMat< T >::length(), PLearn::OnlineLearningModule::output_size, PLERROR, PLWARNING, PLearn::OnlineLearningModule::random_gen, PLearn::TMat< T >::resize(), PLearn::TVec< T >::resize(), PLearn::TMat< T >::size(), PLearn::TVec< T >::size(), start_learning_rate, step_number, weights, and PLearn::TMat< T >::width().

Referenced by build_().

{
    learning_rate = start_learning_rate;
    step_number = 0;

    bias.resize( output_size );
    if( init_bias.size() > 0 )
    {
        if( init_bias.size() != output_size )
            PLERROR( "init_bias (%d) should have length equal to output_size (%d)",
                     init_bias.size(), output_size );
        bias << init_bias;
    }
    else
        bias.clear();

    weights.resize( output_size, input_size );
    if( init_weights.size() > 0 )
    {
        if( weights.length() != output_size || weights.width() != input_size )
            PLERROR( "weights (%d,%d) should have size equal to (output_size, input_size) (%d,%d)",
                     weights.length(), weights.width(),
                     output_size, input_size );

        weights << init_weights;
    }
    else if(init_weights_random_scale != 0. )
    {
        if( !random_gen )
        {
            PLWARNING( "GradNNetLayerModule: cannot forget() without"
                       " random_gen" );
            return;
        }
        real r = init_weights_random_scale / input_size;
        random_gen->fill_random_uniform(weights, -r, r);
    }
    else
        weights.clear();
}


void PLearn::GradNNetLayerModule::fprop ( const Mat & inputs,
Mat & outputs 
) [virtual]

Overridden.

Reimplemented from PLearn::OnlineLearningModule.

Definition at line 89 of file GradNNetLayerModule.cc.

References bias, PLearn::externalProductAcc(), PLearn::OnlineLearningModule::input_size, PLearn::TMat< T >::length(), n, ones, PLearn::OnlineLearningModule::output_size, PLASSERT, PLearn::productTranspose(), PLearn::TMat< T >::resize(), resizeOnes(), weights, and PLearn::TMat< T >::width().

{
    PLASSERT( inputs.width() == input_size );
    int n = inputs.length();
    outputs.resize(n, output_size);
    productTranspose(outputs, inputs, weights);

    // Add bias.
    resizeOnes(n);
    externalProductAcc(outputs, ones, bias); // could be more efficient, but not critical
}
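
In matrix form this computes outputs = inputs * weights^T, with the bias added to every row. A small usage sketch, continuing the hypothetical 'layer' object from the example in the Detailed Description:

Mat inputs(10, layer->input_size);   // 10 examples, one per row
inputs.fill(0.5);
Mat outputs;                         // resized to (10, output_size) by fprop
layer->fprop(inputs, outputs);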


void PLearn::GradNNetLayerModule::fprop ( const Vec & input,
Vec & output 
) const [virtual]

Given the input, compute the output (resizing it appropriately if needed). Soon to be deprecated; use fprop(const TVec<Mat*>& ports_value).

Reimplemented from PLearn::OnlineLearningModule.

Definition at line 77 of file GradNNetLayerModule.cc.

References bias, PLearn::dot(), i, PLearn::OnlineLearningModule::input_size, PLearn::OnlineLearningModule::output_size, PLASSERT_MSG, PLearn::TVec< T >::resize(), PLearn::TVec< T >::size(), and weights.

{
    PLASSERT_MSG( input.size() == input_size,
                  "input.size() should be equal to this->input_size" );

    output.resize( output_size );

    // Applies linear transformation
    for( int i=0 ; i<output_size ; i++ )
        output[i] = dot( weights(i), input ) + bias[i];
}


OptionList & PLearn::GradNNetLayerModule::getOptionList ( ) const [virtual]

Reimplemented from PLearn::Object.

Definition at line 60 of file GradNNetLayerModule.cc.

OptionMap & PLearn::GradNNetLayerModule::getOptionMap ( ) const [virtual]

Reimplemented from PLearn::Object.

Definition at line 60 of file GradNNetLayerModule.cc.

RemoteMethodMap & PLearn::GradNNetLayerModule::getRemoteMethodMap ( ) const [virtual]

Reimplemented from PLearn::Object.

Definition at line 60 of file GradNNetLayerModule.cc.

void PLearn::GradNNetLayerModule::makeDeepCopyFromShallowCopy ( CopiesMap & copies) [virtual]

Transforms a shallow copy into a deep copy.

Reimplemented from PLearn::OnlineLearningModule.

Definition at line 377 of file GradNNetLayerModule.cc.

References bias, PLearn::deepCopyField(), init_bias, init_weights, PLearn::OnlineLearningModule::makeDeepCopyFromShallowCopy(), ones, and weights.
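
The body is not reproduced on this page; based on the references listed above, it delegates to the parent class and then deep-copies each field (sketch; the order of the deepCopyField calls is illustrative):

{
    inherited::makeDeepCopyFromShallowCopy(copies);
    deepCopyField(init_weights, copies);
    deepCopyField(init_bias, copies);
    deepCopyField(weights, copies);
    deepCopyField(bias, copies);
    deepCopyField(ones, copies);
}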


void PLearn::GradNNetLayerModule::resizeOnes ( int  n) [private]

Resize vector 'ones'.

Definition at line 468 of file GradNNetLayerModule.cc.

References PLearn::TVec< T >::fill(), PLearn::TVec< T >::length(), n, ones, and PLearn::TVec< T >::resize().

Referenced by bpropUpdate(), and fprop().

{
    if (ones.length() < n) {
        ones.resize(n);
        ones.fill(1);
    } else if (ones.length() > n)
        ones.resize(n);
}


void PLearn::GradNNetLayerModule::setLearningRate ( real  dynamic_learning_rate) [virtual]

Reimplemented from PLearn::OnlineLearningModule.

Definition at line 358 of file GradNNetLayerModule.cc.

References start_learning_rate, and step_number.

{
    start_learning_rate = dynamic_learning_rate;
    step_number = 0;
    // learning_rate will automatically be set in bpropUpdate()
}

Member Data Documentation

StaticInitializer PLearn::GradNNetLayerModule::_static_initializer_ [static]

Reimplemented from PLearn::OnlineLearningModule.

Definition at line 146 of file GradNNetLayerModule.h.

Vec PLearn::GradNNetLayerModule::bias

The bias.

Definition at line 99 of file GradNNetLayerModule.h.

Referenced by bpropUpdate(), build_(), declareOptions(), forget(), fprop(), and makeDeepCopyFromShallowCopy().

real PLearn::GradNNetLayerModule::decrease_constant

learning_rate = start_learning_rate / (1 + decrease_constant*t), where t is the number of updates since the beginning

Definition at line 77 of file GradNNetLayerModule.h.

Referenced by bpropUpdate(), and declareOptions().

Vec PLearn::GradNNetLayerModule::init_bias

Optional initial bias of the neurons.

Definition at line 83 of file GradNNetLayerModule.h.

Referenced by declareOptions(), forget(), and makeDeepCopyFromShallowCopy().

Mat PLearn::GradNNetLayerModule::init_weights

Optional initial weights of the neurons (one row per neuron).

Definition at line 80 of file GradNNetLayerModule.h.

Referenced by declareOptions(), forget(), and makeDeepCopyFromShallowCopy().

real PLearn::GradNNetLayerModule::init_weights_random_scale

If init_weights is not provided, the weights are initialized randomly from a uniform in [-r,r], with r = init_weights_random_scale/input_size.

Definition at line 87 of file GradNNetLayerModule.h.

Referenced by declareOptions(), and forget().

real PLearn::GradNNetLayerModule::learning_rate [private]

Definition at line 180 of file GradNNetLayerModule.h.

Referenced by bpropUpdate(), and forget().

Vec PLearn::GradNNetLayerModule::ones [protected]

A vector filled with all ones.

Definition at line 157 of file GradNNetLayerModule.h.

Referenced by bpropUpdate(), fprop(), makeDeepCopyFromShallowCopy(), and resizeOnes().

real PLearn::GradNNetLayerModule::start_learning_rate

Starting learning-rate, by which we multiply the gradient step.

Definition at line 73 of file GradNNetLayerModule.h.

Referenced by bpropUpdate(), declareOptions(), forget(), and setLearningRate().

int PLearn::GradNNetLayerModule::step_number [private]

Definition at line 181 of file GradNNetLayerModule.h.

Referenced by bpropUpdate(), forget(), and setLearningRate().

Mat PLearn::GradNNetLayerModule::weights

The weights, one neuron per line.

Definition at line 96 of file GradNNetLayerModule.h.

Referenced by bpropUpdate(), build_(), declareOptions(), forget(), fprop(), and makeDeepCopyFromShallowCopy().


The documentation for this class was generated from the following files:

GradNNetLayerModule.h
GradNNetLayerModule.cc