PLearn 0.1
Affine transformation module, with stochastic gradient descent updates.
#include <LinearFilterModule.h>
Public Member Functions | |
LinearFilterModule () | |
Default constructor. | |
virtual void | fprop (const Vec &input, Vec &output) const |
Given the input, compute the output (resizing it appropriately if needed). SOON TO BE DEPRECATED, USE fprop(const TVec<Mat*>& ports_value) | |
virtual void | fprop (const Mat &inputs, Mat &outputs) |
Overridden. | |
virtual void | bpropUpdate (const Vec &input, const Vec &output, const Vec &output_gradient) |
SOON TO BE DEPRECATED, USE bpropAccUpdate(const TVec<Mat*>& ports_value, const TVec<Mat*>& ports_gradient) Adapt based on the output gradient: this method should only be called just after a corresponding fprop; it should be called with the same arguments as fprop for the first two arguments (and output should not have been modified since then). | |
virtual void | bpropUpdate (const Vec &input, const Vec &output, Vec &input_gradient, const Vec &output_gradient, bool accumulate=false) |
SOON TO BE DEPRECATED, USE bpropAccUpdate(const TVec<Mat*>& ports_value, const TVec<Mat*>& ports_gradient). This version also computes the input gradient. N.B. | |
virtual void | bpropUpdate (const Mat &inputs, const Mat &outputs, Mat &input_gradients, const Mat &output_gradients, bool accumulate=false) |
SOON TO BE DEPRECATED, USE bpropAccUpdate(const TVec<Mat*>& ports_value, const TVec<Mat*>& ports_gradient) | |
virtual void | bbpropUpdate (const Vec &input, const Vec &output, const Vec &output_gradient, const Vec &output_diag_hessian) |
Similar to bpropUpdate, but adapt based also on the estimation of the diagonal of the Hessian matrix, and propagates this back. | |
virtual void | forget () |
Reset the parameters to the state they would have BEFORE starting training. | |
virtual void | setLearningRate (real dynamic_learning_rate) |
virtual string | classname () const |
virtual OptionList & | getOptionList () const |
virtual OptionMap & | getOptionMap () const |
virtual RemoteMethodMap & | getRemoteMethodMap () const |
virtual LinearFilterModule * | deepCopy (CopiesMap &copies) const |
virtual void | build () |
Post-constructor. | |
virtual void | makeDeepCopyFromShallowCopy (CopiesMap &copies) |
Transforms a shallow copy into a deep copy. | |
Static Public Member Functions | |
static string | _classname_ () |
static OptionList & | _getOptionList_ () |
static RemoteMethodMap & | _getRemoteMethodMap_ () |
static Object * | _new_instance_for_typemap_ () |
static bool | _isa_ (const Object *o) |
static void | _static_initialize_ () |
static const PPath & | declaringFile () |
Public Attributes | |
real | start_learning_rate |
Starting learning-rate, by which we multiply the gradient step. | |
real | decrease_constant |
learning_rate = start_learning_rate / (1 + decrease_constant*t), where t is the number of updates since the beginning | |
Vec | init_weights |
Optional initial weights of the neurons (one weight per neuron). | |
Vec | init_bias |
Optional initial bias of the neurons. | |
real | init_weights_random_scale |
If init_weights is not provided, the weights are initialized randomly from a uniform distribution whose range is set by r = init_weights_random_scale / sqrt(input_size): [0, r] when the option is positive, [1 - |r|, 1] when it is negative (see forget()). | |
real | L1_penalty_factor |
Optional (default=0) factor of L1 regularization term. | |
real | L2_penalty_factor |
Optional (default=0) factor of L2 regularization term. | |
Vec | weights |
The weights, one per neuron. | |
Vec | bias |
The bias. | |
bool | no_bias |
bool | between_0_and_1 |
Static Public Attributes | |
static StaticInitializer | _static_initializer_ |
Static Protected Member Functions | |
static void | declareOptions (OptionList &ol) |
Declares the class options. | |
Protected Attributes | |
Vec | ones |
A vector filled with all ones. | |
Private Types | |
typedef OnlineLearningModule | inherited |
Private Member Functions | |
void | build_ () |
This does the actual building. | |
void | resizeOnes (int n) |
Resize vector 'ones'. | |
Private Attributes | |
real | learning_rate |
int | step_number |
Affine transformation module, with stochastic gradient descent updates.
Neural network layer, using stochastic gradient descent to update the neuron weights: output[i] = weights[i] * input[i % input_size] + bias[i], i.e. each output applies a single scalar weight to one input coordinate (cycling through the inputs), plus a bias. Weights and bias are updated by online gradient descent, with a learning rate possibly decreasing in 1/(1 + n_updates_done * decrease_constant). An L1 or L2 regularization penalty can be added to push weights toward 0. Weights can be initialized to 0, to a given initial vector, or randomly from a uniform distribution.
Definition at line 65 of file LinearFilterModule.h.
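A minimal usage sketch (hypothetical option values; the PP smart-pointer idiom and PRandom seeding are assumed from standard PLearn conventions and are not part of this class's documentation):

#include <LinearFilterModule.h>

using namespace PLearn;

PP<LinearFilterModule> module = new LinearFilterModule();
module->input_size  = 3;               // inherited OnlineLearningModule option
module->output_size = 6;               // one scalar weight per output
module->start_learning_rate = 0.01;    // hypothetical value
module->random_gen = new PRandom(42);  // forget() uses it for random init
module->build();                       // calls forget(), sizing weights and bias

Vec input(3), output, output_gradient(6);
input[0] = 0.5; input[1] = -1.2; input[2] = 0.3;
module->fprop(input, output);          // output[i] = weights[i]*input[i % 3] + bias[i]
// ... fill output_gradient from the downstream cost, then:
module->bpropUpdate(input, output, output_gradient);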
typedef OnlineLearningModule PLearn::LinearFilterModule::inherited [private]
Reimplemented from PLearn::OnlineLearningModule.
Definition at line 67 of file LinearFilterModule.h.
PLearn::LinearFilterModule::LinearFilterModule ( )
Default constructor.
Definition at line 65 of file LinearFilterModule.cc.
    : start_learning_rate( .001 ),
      decrease_constant( 0. ),
      init_weights_random_scale( 1. ),
      L1_penalty_factor( 0. ),
      L2_penalty_factor( 0. ),
      no_bias( false ),
      between_0_and_1( false ),
      step_number( 0 )
{}
string PLearn::LinearFilterModule::_classname_ ( ) [static]
Reimplemented from PLearn::OnlineLearningModule.
Definition at line 60 of file LinearFilterModule.cc.
OptionList & PLearn::LinearFilterModule::_getOptionList_ ( ) [static]
Reimplemented from PLearn::OnlineLearningModule.
Definition at line 60 of file LinearFilterModule.cc.
RemoteMethodMap & PLearn::LinearFilterModule::_getRemoteMethodMap_ ( ) [static]
Reimplemented from PLearn::OnlineLearningModule.
Definition at line 60 of file LinearFilterModule.cc.
bool PLearn::LinearFilterModule::_isa_ ( const Object * o ) [static]
Reimplemented from PLearn::OnlineLearningModule.
Definition at line 60 of file LinearFilterModule.cc.
Object * PLearn::LinearFilterModule::_new_instance_for_typemap_ ( ) [static]
Reimplemented from PLearn::Object.
Definition at line 60 of file LinearFilterModule.cc.
void PLearn::LinearFilterModule::_static_initialize_ ( ) [static]
Reimplemented from PLearn::OnlineLearningModule.
Definition at line 60 of file LinearFilterModule.cc.
void PLearn::LinearFilterModule::bbpropUpdate ( const Vec & input, const Vec & output, const Vec & output_gradient, const Vec & output_diag_hessian ) [virtual]
Similar to bpropUpdate, but adapt based also on the estimation of the diagonal of the Hessian matrix, and propagates this back.
If these methods are defined, you can use them INSTEAD of bpropUpdate(...). THE IMPLEMENTATION PROVIDED HERE JUST CALLS bpropUpdate(input, output, output_gradient) AND IGNORES THE OUTPUT DIAGONAL HESSIAN.
Reimplemented from PLearn::OnlineLearningModule.
Definition at line 312 of file LinearFilterModule.cc.
References bpropUpdate(), PLearn::OnlineLearningModule::output_size, PLASSERT_MSG, and PLearn::TVec< T >::size().
{
    PLASSERT_MSG( output_diag_hessian.size() == output_size,
                  "output_diag_hessian.size() should be equal to"
                  " this->output_size" );
    bpropUpdate( input, output, output_gradient );
}
void PLearn::LinearFilterModule::bpropUpdate ( const Vec & input, const Vec & output, Vec & input_gradient, const Vec & output_gradient, bool accumulate = false ) [virtual]
SOON TO BE DEPRECATED, USE bpropAccUpdate(const TVec<Mat*>& ports_value, const TVec<Mat*>& ports_gradient) this version allows to obtain the input gradient as well N.B.
THE DEFAULT IMPLEMENTATION JUST RAISES A PLERROR. The 'accumulate' flag indicates whether input_gradient is accumulated into (true) or overwritten with (false) the computed derivatives.
Reimplemented from PLearn::OnlineLearningModule.
Definition at line 166 of file LinearFilterModule.cc.
References between_0_and_1, bias, PLearn::TVec< T >::clear(), decrease_constant, i, PLearn::OnlineLearningModule::input_size, L1_penalty_factor, L2_penalty_factor, learning_rate, no_bias, PLearn::OnlineLearningModule::output_size, PLASSERT_MSG, PLWARNING, PLearn::TVec< T >::resize(), PLearn::TVec< T >::size(), start_learning_rate, step_number, and weights.
{
    PLASSERT_MSG( input.size() == input_size,
                  "input.size() should be equal to this->input_size" );
    PLASSERT_MSG( output.size() == output_size,
                  "output.size() should be equal to this->output_size" );
    PLASSERT_MSG( output_gradient.size() == output_size,
                  "output_gradient.size() should be equal to this->output_size" );

    if( accumulate )
    {
        PLASSERT_MSG( input_gradient.size() == input_size,
                      "Cannot resize input_gradient AND accumulate into it" );
    }
    else
    {
        input_gradient.resize( input_size );
        input_gradient.clear();
    }

    learning_rate = start_learning_rate / (1+decrease_constant*step_number);

    for( int i=0; i<output_size; i++ )
    {
        real og_i = output_gradient[i];
        real delta_L1 = learning_rate * L1_penalty_factor;
        real delta_L2 = learning_rate * L2_penalty_factor;
        if( delta_L2 > 1 )
            PLWARNING("LinearFilterModule::bpropUpdate:\n"
                      "learning rate = %f is too large!\n", learning_rate);

        real lr_og_i = learning_rate * og_i;
        if( !no_bias )
            bias[i] -= lr_og_i;

        input_gradient[i % input_size] += weights[i] * og_i;

        if( delta_L2 > 0. )
            weights[i] *= (1 - delta_L2);

        weights[i] -= input[i % input_size] * lr_og_i;

        if( delta_L1 > 0. )
        {
            if( weights[i] > delta_L1 )
                weights[i] -= delta_L1;
            else if( weights[i] < -delta_L1 )
                weights[i] += delta_L1;
            else
                weights[i] = 0.;
        }

        if( between_0_and_1 )
        {
            if( weights[i] > 1. )
                weights[i] = 1.;
            if( weights[i] < 0. )
                weights[i] = 0.;
        }
    }
    step_number++;
}
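For reference, the per-weight arithmetic inside the loop above amounts to the following standalone sketch (double replaces PLearn's real; updated_weight is a hypothetical helper written for illustration, not a member of the class):

#include <algorithm>

// One weight w, its input coordinate x = input[i % input_size],
// and output gradient g = output_gradient[i]:
double updated_weight(double w, double x, double g,
                      double lr, double l1, double l2,
                      bool between_0_and_1)
{
    w *= (1.0 - lr * l2);               // L2 shrinkage (no-op when l2 == 0)
    w -= lr * g * x;                    // stochastic gradient step
    double t = lr * l1;                 // L1 soft-thresholding
    if      (w >  t) w -= t;
    else if (w < -t) w += t;
    else             w  = 0.0;
    if (between_0_and_1)                // optional clamping to [0, 1]
        w = std::min(1.0, std::max(0.0, w));
    return w;
}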
void PLearn::LinearFilterModule::bpropUpdate ( const Mat & inputs, const Mat & outputs, Mat & input_gradients, const Mat & output_gradients, bool accumulate = false ) [virtual]
SOON TO BE DEPRECATED, USE bpropAccUpdate(const TVec<Mat*>& ports_value, const TVec<Mat*>& ports_gradient)
Reimplemented from PLearn::OnlineLearningModule.
Definition at line 233 of file LinearFilterModule.cc.
References between_0_and_1, bias, decrease_constant, PLearn::TMat< T >::fill(), i, PLearn::OnlineLearningModule::input_size, L1_penalty_factor, L2_penalty_factor, learning_rate, PLearn::TMat< T >::length(), n, no_bias, ones, PLearn::OnlineLearningModule::output_size, PLASSERT, PLASSERT_MSG, PLearn::TMat< T >::resize(), resizeOnes(), start_learning_rate, step_number, PLearn::transposeProductScaleAcc(), weights, and PLearn::TMat< T >::width().
{
    PLASSERT( inputs.width() == input_size );
    PLASSERT( outputs.width() == output_size );
    PLASSERT( output_gradients.width() == output_size );

    int n = inputs.length();

    if( accumulate )
    {
        PLASSERT_MSG( input_gradients.width() == input_size &&
                      input_gradients.length() == n,
                      "Cannot resize input_gradients and accumulate into it" );
    }
    else
    {
        input_gradients.resize(n, input_size);
        input_gradients.fill(0);
    }

    learning_rate = start_learning_rate / (1+decrease_constant*step_number);
    real avg_lr = learning_rate / n; // To obtain an average on a mini-batch.

    // With L2 regularization, weights are scaled by a coefficient equal to
    // 1 - learning rate * penalty.
    real l2_scaling =
        L2_penalty_factor > 0 ? 1 - learning_rate * L2_penalty_factor
                              : 1;
    PLASSERT_MSG(l2_scaling > 0, "Learning rate too large");

    // Compute input gradient.
    for(int i_sample = 0; i_sample < outputs.length(); i_sample++)
        for(int i = 0; i < output_size; i++)
            input_gradients(i_sample, i % input_size) +=
                weights[i] * output_gradients(i_sample, i);

    // Update bias.
    if( !no_bias )
    {
        resizeOnes(n);
        transposeProductScaleAcc(bias, output_gradients, ones, -avg_lr,
                                 real(1));
    }

    // Update weights.
    for(int i_sample = 0; i_sample < outputs.length(); i_sample++)
        for(int i = 0; i < output_size; i++)
        {
            weights[i] -= avg_lr * l2_scaling *
                          output_gradients(i_sample, i) *
                          inputs(i_sample, i % input_size);
            if( between_0_and_1 )
            {
                if( weights[i] > 1. )
                    weights[i] = 1.;
                if( weights[i] < 0. )
                    weights[i] = 0.;
            }
        }

    // Apply L1 penalty if needed (note: this is not very efficient).
    if (L1_penalty_factor > 0)
    {
        real delta_L1 = learning_rate * L1_penalty_factor;
        for( int i=0; i<output_size; i++ )
        {
            if( weights[i] > delta_L1 )
                weights[i] -= delta_L1;
            else if( weights[i] < -delta_L1 )
                weights[i] += delta_L1;
            else
                weights[i] = 0.;
        }
    }
    step_number += n;
}
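Ignoring the optional [0, 1] clamping, the weight update in this mini-batch version collapses to one averaged step per weight (a restatement of the loops above, not extra library code):

// For each weight i, with g_s = output_gradients(s, i) and
// x_s = inputs(s, i % input_size) over the n samples s:
//
//   weights[i] -= (learning_rate / n) * l2_scaling * sum_s( g_s * x_s );
//
// followed, when L1_penalty_factor > 0, by one soft-thresholding pass with
// threshold learning_rate * L1_penalty_factor. Note that step_number
// advances by n, so the decay schedule counts samples, not mini-batches.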
void PLearn::LinearFilterModule::bpropUpdate ( const Vec & input, const Vec & output, const Vec & output_gradient ) [virtual]
SOON TO BE DEPRECATED, USE bpropAccUpdate(const TVec<Mat*>& ports_value, const TVec<Mat*>& ports_gradient) Adapt based on the output gradient: this method should only be called just after a corresponding fprop; it should be called with the same arguments as fprop for the first two arguments (and output should not have been modified since then).
Since sub-classes are supposed to learn ONLINE, the object is 'ready-to-be-used' just after any bpropUpdate. N.B. The DEFAULT IMPLEMENTATION JUST CALLS bpropUpdate(input, output, input_gradient, output_gradient) AND IGNORES INPUT GRADIENT.
Reimplemented from PLearn::OnlineLearningModule.
Definition at line 110 of file LinearFilterModule.cc.
References between_0_and_1, bias, decrease_constant, i, PLearn::OnlineLearningModule::input_size, L1_penalty_factor, L2_penalty_factor, learning_rate, no_bias, PLearn::OnlineLearningModule::output_size, PLASSERT_MSG, PLWARNING, PLearn::TVec< T >::size(), start_learning_rate, step_number, and weights.
Referenced by bbpropUpdate().
{
    PLASSERT_MSG( input.size() == input_size,
                  "input.size() should be equal to this->input_size" );
    PLASSERT_MSG( output.size() == output_size,
                  "output.size() should be equal to this->output_size" );
    PLASSERT_MSG( output_gradient.size() == output_size,
                  "output_gradient.size() should be equal to this->output_size" );

    learning_rate = start_learning_rate / (1+decrease_constant*step_number);

    for( int i=0; i<output_size; i++ )
    {
        real og_i = output_gradient[i];
        real delta_L1 = learning_rate * L1_penalty_factor;
        real delta_L2 = learning_rate * L2_penalty_factor;
        if( delta_L2 > 1 )
            PLWARNING("LinearFilterModule::bpropUpdate:\n"
                      "learning rate = %f is too large!\n", learning_rate);

        real lr_og_i = learning_rate * og_i;
        if( !no_bias )
            bias[i] -= lr_og_i;

        if( delta_L2 > 0. )
            weights[i] *= (1 - delta_L2);

        weights[i] -= input[i % input_size] * lr_og_i;

        if( delta_L1 > 0. )
        {
            if( weights[i] > delta_L1 )
                weights[i] -= delta_L1;
            else if( weights[i] < -delta_L1 )
                weights[i] += delta_L1;
            else
                weights[i] = 0.;
        }

        if( between_0_and_1 )
        {
            if( weights[i] > 1. )
                weights[i] = 1.;
            if( weights[i] < 0. )
                weights[i] = 0.;
        }
    }
    step_number++;
}
void PLearn::LinearFilterModule::build ( ) [virtual]
Post-constructor.
The normal implementation should call simply inherited::build(), then this class's build_(). This method should be callable again at later times, after modifying some option fields to change the "architecture" of the object.
Reimplemented from PLearn::OnlineLearningModule.
Definition at line 389 of file LinearFilterModule.cc.
References PLearn::OnlineLearningModule::build(), and build_().
{
    inherited::build();
    build_();
}
void PLearn::LinearFilterModule::build_ ( ) [private]
This does the actual building.
Reimplemented from PLearn::OnlineLearningModule.
Definition at line 477 of file LinearFilterModule.cc.
References bias, forget(), PLearn::OnlineLearningModule::input_size, PLearn::TVec< T >::length(), PLearn::OnlineLearningModule::output_size, PLERROR, PLearn::TVec< T >::size(), and weights.
Referenced by build().
{
    if( input_size < 0 ) // has not been initialized
        return;

    if( output_size < 0 )
        PLERROR("LinearFilterModule::build_: 'output_size' is < 0 (%i),\n"
                " you should set it to a positive integer (the number of"
                " neurons).\n", output_size);

    if( weights.length() != output_size
        || bias.size() != output_size )
    {
        forget();
    }
}
string PLearn::LinearFilterModule::classname ( ) const [virtual]
Reimplemented from PLearn::Object.
Definition at line 60 of file LinearFilterModule.cc.
void PLearn::LinearFilterModule::declareOptions ( OptionList & ol ) [static, protected]
Declares the class options.
Reimplemented from PLearn::OnlineLearningModule.
Definition at line 412 of file LinearFilterModule.cc.
References between_0_and_1, bias, PLearn::OptionBase::buildoption, PLearn::declareOption(), PLearn::OnlineLearningModule::declareOptions(), decrease_constant, init_bias, init_weights, init_weights_random_scale, L1_penalty_factor, L2_penalty_factor, PLearn::OptionBase::learntoption, no_bias, start_learning_rate, and weights.
{
    declareOption(ol, "start_learning_rate",
                  &LinearFilterModule::start_learning_rate,
                  OptionBase::buildoption,
                  "Learning-rate of stochastic gradient optimization");

    declareOption(ol, "decrease_constant",
                  &LinearFilterModule::decrease_constant,
                  OptionBase::buildoption,
                  "Decrease constant of stochastic gradient optimization");

    declareOption(ol, "init_weights", &LinearFilterModule::init_weights,
                  OptionBase::buildoption,
                  "Optional initial weights of the neurons (one row per neuron).\n"
                  "If not provided then weights are initialized according to a uniform\n"
                  "distribution (see init_weights_random_scale)\n");

    declareOption(ol, "init_bias", &LinearFilterModule::init_bias,
                  OptionBase::buildoption,
                  "Optional initial bias of the neurons. If not provided, they are set to 0.\n");

    declareOption(ol, "init_weights_random_scale",
                  &LinearFilterModule::init_weights_random_scale,
                  OptionBase::buildoption,
                  "If init_weights is not provided, the weights are initialized randomly\n"
                  "from a uniform in [-r,r], with r = init_weights_random_scale/input_size.\n"
                  "To clear the weights initially, just set this option to 0.");

    declareOption(ol, "L1_penalty_factor",
                  &LinearFilterModule::L1_penalty_factor,
                  OptionBase::buildoption,
                  "Optional (default=0) factor of L1 regularization term, i.e.\n"
                  "minimize L1_penalty_factor * sum_{ij} |weights(i,j)| during training.\n");

    declareOption(ol, "L2_penalty_factor",
                  &LinearFilterModule::L2_penalty_factor,
                  OptionBase::buildoption,
                  "Optional (default=0) factor of L2 regularization term, i.e.\n"
                  "minimize 0.5 * L2_penalty_factor * sum_{ij} weights(i,j)^2 during training.\n");

    declareOption(ol, "no_bias", &LinearFilterModule::no_bias,
                  OptionBase::buildoption,
                  "Whether or not to add biases.\n");

    declareOption(ol, "between_0_and_1", &LinearFilterModule::between_0_and_1,
                  OptionBase::buildoption,
                  "Should all weights stay between 0 and 1.\n");

    declareOption(ol, "weights", &LinearFilterModule::weights,
                  OptionBase::learntoption,
                  "Input weights of the neurons (one weight per neuron)");

    declareOption(ol, "bias", &LinearFilterModule::bias,
                  OptionBase::learntoption,
                  "Bias of the neurons");

    inherited::declareOptions(ol);
}
static const PPath & PLearn::LinearFilterModule::declaringFile ( ) [inline, static]
Reimplemented from PLearn::OnlineLearningModule.
Definition at line 149 of file LinearFilterModule.h.
LinearFilterModule * PLearn::LinearFilterModule::deepCopy ( CopiesMap & copies ) const [virtual]
Reimplemented from PLearn::OnlineLearningModule.
Definition at line 60 of file LinearFilterModule.cc.
void PLearn::LinearFilterModule::forget ( ) [virtual]
Reset the parameters to the state they would have BEFORE starting training.
Note that this method is necessarily called from build().
Implements PLearn::OnlineLearningModule.
Definition at line 339 of file LinearFilterModule.cc.
References bias, PLearn::TVec< T >::clear(), init_bias, init_weights, init_weights_random_scale, PLearn::OnlineLearningModule::input_size, learning_rate, PLearn::TVec< T >::length(), no_bias, PLearn::OnlineLearningModule::output_size, PLERROR, PLearn::OnlineLearningModule::random_gen, PLearn::TVec< T >::resize(), PLearn::TVec< T >::size(), PLearn::sqrt(), start_learning_rate, step_number, and weights.
Referenced by build_().
{
    learning_rate = start_learning_rate;
    step_number = 0;

    bias.resize( output_size );
    if( init_bias.size() > 0 )
    {
        if( init_bias.size() != output_size )
            PLERROR( "init_bias (%d) should have length equal to output_size (%d)",
                     init_bias.size(), output_size );
        bias << init_bias;
    }
    else
        bias.clear();
    if( no_bias )
        bias.clear();

    weights.resize( output_size );
    if( init_weights.size() > 0 )
    {
        if( init_weights.length() != output_size )
            PLERROR( "init_weights (%d) should have size equal to (output_size) (%d)",
                     init_weights.length(), output_size );
        weights << init_weights;
    }
    else if( init_weights_random_scale < 0. )
    {
        real r = - init_weights_random_scale / sqrt( (real)input_size );
        random_gen->fill_random_uniform(weights, 1.-r, 1.);
    }
    else
    {
        real r = init_weights_random_scale / sqrt( (real)input_size );
        random_gen->fill_random_uniform(weights, 0., r);
    }
}
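A worked example of the random-initialization branches (hypothetical values, following the code above):

// input_size = 4  =>  sqrt(input_size) = 2
// init_weights_random_scale =  1.  =>  r = 0.5, weights ~ U[0, 0.5]
// init_weights_random_scale = -1.  =>  r = 0.5, weights ~ U[0.5, 1]
// init_weights_random_scale =  0.  =>  weights are all cleared to 0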
void PLearn::LinearFilterModule::fprop ( const Mat & inputs, Mat & outputs ) [virtual]
Overridden.
Reimplemented from PLearn::OnlineLearningModule.
Definition at line 91 of file LinearFilterModule.cc.
References bias, PLearn::externalProductAcc(), i, PLearn::OnlineLearningModule::input_size, PLearn::TMat< T >::length(), n, ones, PLearn::OnlineLearningModule::output_size, PLASSERT, PLearn::TMat< T >::resize(), resizeOnes(), weights, and PLearn::TMat< T >::width().
{
    PLASSERT( inputs.width() == input_size );
    int n = inputs.length();
    outputs.resize(n, output_size);
    for(int is=0; is<n; is++)
        for(int i=0; i<output_size; i++)
            outputs(is,i) = weights[i] * inputs(is, i % input_size);

    // Add bias.
    resizeOnes(n);
    externalProductAcc(outputs, ones, bias); // could be more efficient, but not critical
}
void PLearn::LinearFilterModule::fprop ( const Vec & input, Vec & output ) const [virtual]
Given the input, compute the output (resizing it appropriately if needed). SOON TO BE DEPRECATED, USE fprop(const TVec<Mat*>& ports_value)
Reimplemented from PLearn::OnlineLearningModule.
Definition at line 79 of file LinearFilterModule.cc.
References bias, i, PLearn::OnlineLearningModule::input_size, PLearn::OnlineLearningModule::output_size, PLASSERT_MSG, PLearn::TVec< T >::resize(), PLearn::TVec< T >::size(), and weights.
{
    PLASSERT_MSG( input.size() == input_size,
                  "input.size() should be equal to this->input_size" );

    output.resize( output_size );

    // Applies linear transformation
    for( int i=0 ; i<output_size ; i++ )
        output[i] = weights[i] * input[i % input_size] + bias[i];
}
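As a worked example (hypothetical sizes): with input_size = 2 and output_size = 4, the modulo indexing pairs each output with one input coordinate:

output[0] = weights[0] * input[0] + bias[0]   // 0 % 2 == 0
output[1] = weights[1] * input[1] + bias[1]   // 1 % 2 == 1
output[2] = weights[2] * input[0] + bias[2]   // 2 % 2 == 0
output[3] = weights[3] * input[1] + bias[3]   // 3 % 2 == 1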
OptionList & PLearn::LinearFilterModule::getOptionList ( ) const [virtual]
Reimplemented from PLearn::Object.
Definition at line 60 of file LinearFilterModule.cc.
OptionMap & PLearn::LinearFilterModule::getOptionMap ( ) const [virtual]
Reimplemented from PLearn::Object.
Definition at line 60 of file LinearFilterModule.cc.
RemoteMethodMap & PLearn::LinearFilterModule::getRemoteMethodMap ( ) const [virtual]
Reimplemented from PLearn::Object.
Definition at line 60 of file LinearFilterModule.cc.
void PLearn::LinearFilterModule::makeDeepCopyFromShallowCopy ( CopiesMap & copies ) [virtual]
Transforms a shallow copy into a deep copy.
Reimplemented from PLearn::OnlineLearningModule.
Definition at line 398 of file LinearFilterModule.cc.
References bias, PLearn::deepCopyField(), init_bias, init_weights, PLearn::OnlineLearningModule::makeDeepCopyFromShallowCopy(), ones, and weights.
{
    inherited::makeDeepCopyFromShallowCopy(copies);

    deepCopyField(init_weights, copies);
    deepCopyField(init_bias, copies);
    deepCopyField(weights, copies);
    deepCopyField(bias, copies);
    deepCopyField(ones, copies);
}
void PLearn::LinearFilterModule::resizeOnes ( int n ) [private]
Resize vector 'ones'.
Definition at line 497 of file LinearFilterModule.cc.
References PLearn::TVec< T >::fill(), PLearn::TVec< T >::length(), n, ones, and PLearn::TVec< T >::resize().
Referenced by bpropUpdate(), and fprop().
{
    if (ones.length() < n) {
        ones.resize(n);
        ones.fill(1);
    } else if (ones.length() > n)
        ones.resize(n);
}
void PLearn::LinearFilterModule::setLearningRate ( real dynamic_learning_rate ) [virtual]
Reimplemented from PLearn::OnlineLearningModule.
Definition at line 379 of file LinearFilterModule.cc.
References start_learning_rate, and step_number.
{
    start_learning_rate = dynamic_learning_rate;
    step_number = 0;
    // learning_rate will automatically be set in bpropUpdate()
}
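Because step_number is reset, the decay schedule restarts from the new base rate. The effective rate used by subsequent updates follows the formula from bpropUpdate() (standalone sketch; effective_lr is an illustrative helper, not part of the class):

// Effective learning rate after t sample updates:
double effective_lr(double start_lr, double decrease_constant, long t)
{
    return start_lr / (1.0 + decrease_constant * t);
}
// e.g. start_lr = 0.01, decrease_constant = 1e-4:
//   t = 0     -> 0.01
//   t = 10000 -> 0.005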
StaticInitializer PLearn::LinearFilterModule::_static_initializer_ [static]
Reimplemented from PLearn::OnlineLearningModule.
Definition at line 149 of file LinearFilterModule.h.
bool PLearn::LinearFilterModule::between_0_and_1
Definition at line 102 of file LinearFilterModule.h.
Referenced by bpropUpdate(), and declareOptions().
Vec PLearn::LinearFilterModule::bias
The bias.
Definition at line 99 of file LinearFilterModule.h.
Referenced by bpropUpdate(), build_(), declareOptions(), forget(), fprop(), and makeDeepCopyFromShallowCopy().
real PLearn::LinearFilterModule::decrease_constant
learning_rate = start_learning_rate / (1 + decrease_constant*t), where t is the number of updates since the beginning.
Definition at line 77 of file LinearFilterModule.h.
Referenced by bpropUpdate(), and declareOptions().
Vec PLearn::LinearFilterModule::init_bias
Optional initial bias of the neurons.
Definition at line 83 of file LinearFilterModule.h.
Referenced by declareOptions(), forget(), and makeDeepCopyFromShallowCopy().
Vec PLearn::LinearFilterModule::init_weights
Optional initial weights of the neurons (one weight per neuron).
Definition at line 80 of file LinearFilterModule.h.
Referenced by declareOptions(), forget(), and makeDeepCopyFromShallowCopy().
real PLearn::LinearFilterModule::init_weights_random_scale
If init_weights is not provided, the weights are initialized randomly from a uniform distribution whose range is set by r = init_weights_random_scale / sqrt(input_size): [0, r] when the option is positive, [1 - |r|, 1] when it is negative (see forget()).
Definition at line 87 of file LinearFilterModule.h.
Referenced by declareOptions(), and forget().
real PLearn::LinearFilterModule::L1_penalty_factor
Optional (default=0) factor of L1 regularization term.
Definition at line 90 of file LinearFilterModule.h.
Referenced by bpropUpdate(), and declareOptions().
real PLearn::LinearFilterModule::L2_penalty_factor
Optional (default=0) factor of L2 regularization term.
Definition at line 93 of file LinearFilterModule.h.
Referenced by bpropUpdate(), and declareOptions().
real PLearn::LinearFilterModule::learning_rate [private]
Definition at line 183 of file LinearFilterModule.h.
Referenced by bpropUpdate(), and forget().
bool PLearn::LinearFilterModule::no_bias
Definition at line 101 of file LinearFilterModule.h.
Referenced by bpropUpdate(), declareOptions(), and forget().
Vec PLearn::LinearFilterModule::ones [protected]
A vector filled with all ones.
Definition at line 160 of file LinearFilterModule.h.
Referenced by bpropUpdate(), fprop(), makeDeepCopyFromShallowCopy(), and resizeOnes().
real PLearn::LinearFilterModule::start_learning_rate
Starting learning-rate, by which we multiply the gradient step.
Definition at line 73 of file LinearFilterModule.h.
Referenced by bpropUpdate(), declareOptions(), forget(), and setLearningRate().
int PLearn::LinearFilterModule::step_number [private] |
Definition at line 184 of file LinearFilterModule.h.
Referenced by bpropUpdate(), forget(), and setLearningRate().
Vec PLearn::LinearFilterModule::weights
The weights, one per neuron.
Definition at line 96 of file LinearFilterModule.h.
Referenced by bpropUpdate(), build_(), declareOptions(), forget(), fprop(), and makeDeepCopyFromShallowCopy().