PLearn 0.1
Single layer of a neural network, with acceleration tricks.
#include <FNetLayerVariable.h>
Public Member Functions

    FNetLayerVariable ()
        Default constructor for persistence.
    FNetLayerVariable (Var inputs, Var weights, Var biases, Var inhibition_weights, bool _inhibit_next_units=true, bool _normalize_inputs=true, bool _backprop_to_inputs=false, real _exp_moving_average_coefficient=0.001, real _average_error_fraction_to_threshold=0.5)
    virtual string classname () const
    virtual OptionList & getOptionList () const
    virtual OptionMap & getOptionMap () const
    virtual RemoteMethodMap & getRemoteMethodMap () const
    virtual FNetLayerVariable * deepCopy (CopiesMap &copies) const
    virtual void build ()
        Post-constructor.
    virtual void makeDeepCopyFromShallowCopy (CopiesMap &copies)
        Does the necessary operations to transform a shallow copy (this) into a deep copy by deep-copying all the members that need to be.
    virtual void recomputeSize (int &l, int &w) const
        Recomputes the length l and width w that this variable should have, according to its parent variables.
    virtual void fprop ()
        compute output given input
    virtual void bprop ()

Static Public Member Functions

    static string _classname_ ()
    static OptionList & _getOptionList_ ()
    static RemoteMethodMap & _getRemoteMethodMap_ ()
    static Object * _new_instance_for_typemap_ ()
    static bool _isa_ (const Object *o)
    static void _static_initialize_ ()
    static const PPath & declaringFile ()

Public Attributes

    real c1_
        OPTIONS.
    real c2_
    int n_inputs
    int n_hidden
    int minibatch_size
    bool inhibit_next_units
    bool inhibit_by_sum
    bool squashed_inhibition
    bool normalize_inputs
    bool backprop_to_inputs
    real exp_moving_average_coefficient
    real average_error_fraction_to_threshold
    real min_stddev

Static Public Attributes

    static StaticInitializer _static_initializer_
Static Protected Member Functions

    static void declareOptions (OptionList &ol)
        Declares the options for this class.
Private Types

    typedef NaryVariable inherited

Private Member Functions

    void build_ ()
        Object-specific post-constructor.

Private Attributes

    Mat mu
        INTERNAL LEARNED PARAMETERS.
    Mat invs
    real gradient_threshold
    Mat mu2
        INTERNAL COMPUTATION.
    real avg_act_gradient
    bool no_bprop_has_been_done
    TVec< Mat > u
    Mat inh
    Mat cum_inh
Single layer of a neural network, with acceleration tricks.
The variable takes four parents: (1) the input of the layer (a minibatch_size x n_inputs matrix), (2) the weight matrix (n_hidden x n_inputs), (3) the bias vector (length n_hidden), and (4) the two inhibition coefficients c1 and c2.
Definition at line 54 of file FNetLayerVariable.h.
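For orientation, here is a minimal construction sketch. The sizes are hypothetical; only the constructor documented below and the standard Var(length, width) constructors are assumed:

    // Hypothetical example: a layer with 10 inputs and 5 hidden units,
    // processing minibatches of 20 examples.
    using namespace PLearn;

    Var inputs(20, 10);            // minibatch_size x n_inputs
    Var weights(5, 10);            // n_hidden x n_inputs
    Var biases(5);                 // one bias per hidden unit
    Var inhibition_weights(2);     // holds the coefficients c1 and c2

    Var layer = new FNetLayerVariable(inputs, weights, biases,
                                      inhibition_weights);
    // Once fprop() has run, layer->matValue is a
    // minibatch_size x n_hidden matrix of unit outputs.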
typedef NaryVariable PLearn::FNetLayerVariable::inherited [private]
Reimplemented from PLearn::NaryVariable.
Definition at line 56 of file FNetLayerVariable.h.
PLearn::FNetLayerVariable::FNetLayerVariable ( )
Default constructor for persistence.
Definition at line 94 of file FNetLayerVariable.cc.
References avg_act_gradient.
:   c1_(0),
    c2_(0),
    n_inputs(-1),   // MUST BE SPECIFIED BY THE USER
    n_hidden(-1),   // MUST BE SPECIFIED BY THE USER
    minibatch_size(1),
    inhibit_next_units(true),
    inhibit_by_sum(false),
    squashed_inhibition(true),
    normalize_inputs(true),
    backprop_to_inputs(false),
    exp_moving_average_coefficient(0.001),
    average_error_fraction_to_threshold(0.5),
    min_stddev(1e-2)
{
    avg_act_gradient = -1;
}
PLearn::FNetLayerVariable::FNetLayerVariable ( Var inputs,
                                               Var weights,
                                               Var biases,
                                               Var inhibition_weights,
                                               bool _inhibit_next_units = true,
                                               bool _normalize_inputs = true,
                                               bool _backprop_to_inputs = false,
                                               real _exp_moving_average_coefficient = 0.001,
                                               real _average_error_fraction_to_threshold = 0.5 )
Definition at line 112 of file FNetLayerVariable.cc.
References avg_act_gradient, and build_().
:   inherited(inputs & weights & biases & inhibition_weights,
              inputs->length(), weights->length()),
    c1_(0),
    c2_(0),
    n_inputs(inputs->matValue.width()),
    n_hidden(weights->matValue.length()),
    minibatch_size(inputs->matValue.length()),
    inhibit_next_units(_inhibit_next_units),
    inhibit_by_sum(false),
    squashed_inhibition(true),
    normalize_inputs(_normalize_inputs),
    backprop_to_inputs(_backprop_to_inputs),
    exp_moving_average_coefficient(_exp_moving_average_coefficient),
    average_error_fraction_to_threshold(_average_error_fraction_to_threshold),
    min_stddev(1e-2)
{
    avg_act_gradient = -1;
    build_();
}
string PLearn::FNetLayerVariable::_classname_ ( ) [static]
Reimplemented from PLearn::NaryVariable.
Definition at line 92 of file FNetLayerVariable.cc.
OptionList & PLearn::FNetLayerVariable::_getOptionList_ ( ) [static]
Reimplemented from PLearn::NaryVariable.
Definition at line 92 of file FNetLayerVariable.cc.
RemoteMethodMap & PLearn::FNetLayerVariable::_getRemoteMethodMap_ ( ) [static]
Reimplemented from PLearn::NaryVariable.
Definition at line 92 of file FNetLayerVariable.cc.
bool PLearn::FNetLayerVariable::_isa_ ( const Object * o ) [static]
Reimplemented from PLearn::NaryVariable.
Definition at line 92 of file FNetLayerVariable.cc.
Object * PLearn::FNetLayerVariable::_new_instance_for_typemap_ ( ) [static]
Reimplemented from PLearn::Object.
Definition at line 92 of file FNetLayerVariable.cc.
void PLearn::FNetLayerVariable::_static_initialize_ ( ) [static]
Reimplemented from PLearn::NaryVariable.
Definition at line 92 of file FNetLayerVariable.cc.
void PLearn::FNetLayerVariable::bprop ( ) [virtual]
Implements PLearn::Variable.
Definition at line 342 of file FNetLayerVariable.cc.
References average_error_fraction_to_threshold, avg_act_gradient, backprop_to_inputs, c1_, c2_, PLearn::computeInverseStandardDeviationFromMeanAndSquareMean(), cum_inh, exp_moving_average_coefficient, PLearn::exponentialMovingAverageUpdate(), PLearn::exponentialMovingSquareUpdate(), PLearn::fast_exact_is_equal(), gradient_threshold, PLearn::Variable::gradientdata, i, inh, inhibit_by_sum, inhibit_next_units, invs, j, PLearn::Variable::matGradient, PLearn::Variable::matValue, min_stddev, minibatch_size, PLearn::TMat< T >::mod(), mu, mu2, PLearn::multiplyAcc(), n_hidden, n_inputs, normalize_inputs, squashed_inhibition, u, PLearn::Variable::valuedata, PLearn::NaryVariable::varray, and x.
{
    real* x = varray[0]->valuedata;
    real* dx = varray[0]->gradientdata;
    real* y = valuedata;
    real* dy = gradientdata;
    real c1 = varray[3]->valuedata[0];
    real c2 = varray[3]->valuedata[1];
    real* db = varray[2]->gradientdata;
    real& dc1 = varray[3]->gradientdata[0];
    real& dc2 = varray[3]->gradientdata[1];
    int mx = varray[0]->matValue.mod();
    int mdx = varray[0]->matGradient.mod();
    int my = matValue.mod();
    int mdy = matGradient.mod();
    for (int k=0; k<minibatch_size; k++, x+=mx, y+=my, dx+=mdx, dy+=mdy)
    {
        Mat u_k = u[k];
        real* inh_k = inh[k];
        real* cum_inh_k = cum_inh[k];
        real dcum_s = 0;
        Vec xk = varray[0]->matValue(k);
        Vec dxk = varray[0]->matGradient(k);
        for (int i=n_hidden-1; i>=0; i--)
        {
            real dai = (dy[i]+dcum_s)*y[i]*(1-y[i]);
            real erri = fabs(dai);
            avg_act_gradient = (1 - exp_moving_average_coefficient)*avg_act_gradient
                             + exp_moving_average_coefficient * erri;
            if (erri > gradient_threshold)
            {
                real* dWi = varray[1]->matGradient[i];
                if (normalize_inputs)
                {
                    real* u_ki = u_k[i];
                    for (int j=0; j<n_inputs; j++)
                        dWi[j] += dai * u_ki[j];
                    Vec mu_i = mu(i);
                    Vec mu2_i = mu2(i);
                    exponentialMovingAverageUpdate(mu_i, xk, exp_moving_average_coefficient);
                    exponentialMovingSquareUpdate(mu2_i, xk, exp_moving_average_coefficient);
                }
                else
                    for (int j=0; j<n_inputs; j++)
                        dWi[j] += dai * x[j];
                db[i] += dai;
                if (inhibit_next_units && i>0)
                {
                    real inh_ki = inh_k[i];
                    if (!fast_exact_is_equal(c1_, 0)) // c1 is optimized.
                        dc1 -= dai * inh_ki;
                    if (squashed_inhibition)
                    {
                        real dinh_ki = - dai * c1 * inh_ki * (1 - inh_ki);
                        if (!fast_exact_is_equal(c2_, 0)) // c2 is optimized.
                            dc2 += dinh_ki * cum_inh_k[i];
                        if (inhibit_by_sum)
                            dcum_s += dinh_ki * c2;
                        else
                            dcum_s += dinh_ki * c2 / i;
                    }
                    else
                    {
                        real dinh_ki = - dai * c1;
                        if (inhibit_by_sum)
                            dcum_s += dinh_ki;
                        else
                            dcum_s += dinh_ki / i;
                    }
                }
                if (backprop_to_inputs)
                {
                    Vec Wi = varray[1]->matValue(i);
                    multiplyAcc(dxk, Wi, dai);
                }
            }
        }
    }
    if (normalize_inputs)
        // invs = 1 / sqrt(mu2 - mu*mu)
        computeInverseStandardDeviationFromMeanAndSquareMean(invs, mu, mu2,
                                                             min_stddev, min_stddev);
    gradient_threshold = average_error_fraction_to_threshold * avg_act_gradient;
}
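The acceleration trick is visible in the loop above: a unit's parameters are updated only when its absolute activation gradient exceeds a running threshold. As a sketch of what the code computes, with \alpha = exp_moving_average_coefficient and f = average_error_fraction_to_threshold:

    \bar{g} \leftarrow (1-\alpha)\,\bar{g} + \alpha\,\left|\frac{\partial C}{\partial a_{k,i}}\right|,
    \qquad \theta = f\,\bar{g},

and the weights, bias and inhibition terms of unit i are updated only when |\partial C / \partial a_{k,i}| > \theta.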
void PLearn::FNetLayerVariable::build ( ) [virtual]
Post-constructor.
The normal implementation should call simply inherited::build(), then this class's build_(). This method should be callable again at later times, after modifying some option fields to change the "architecture" of the object.
Reimplemented from PLearn::NaryVariable.
Definition at line 143 of file FNetLayerVariable.cc.
References PLearn::NaryVariable::build(), and build_().
{
    inherited::build();
    build_();
}
void PLearn::FNetLayerVariable::build_ ( ) [private]
Object-specific post-constructor.
This method should be redefined in subclasses and do the actual building of the object according to previously set option fields. Constructors can just set option fields, and then call build_. This method is NOT virtual, and will typically be called only from three places: a constructor, the public virtual build() method, and possibly the public virtual read method (which calls its parent's read). build_() can assume that its parent's build_() has already been called.
Reimplemented from PLearn::NaryVariable.
Definition at line 150 of file FNetLayerVariable.cc.
References avg_act_gradient, c1_, c2_, PLearn::TMat< T >::clear(), cum_inh, PLearn::fast_exact_is_equal(), PLearn::TVec< T >::fill(), PLearn::TMat< T >::fill(), PLearn::fill_random_uniform(), gradient_threshold, i, inh, invs, PLearn::TMat< T >::length(), PLearn::Variable::length(), PLearn::TVec< T >::length(), PLearn::Variable::matValue, minibatch_size, mu, mu2, n_hidden, n_inputs, no_bprop_has_been_done, normalize_inputs, PLERROR, PLWARNING, PLearn::Variable::resize(), PLearn::TMat< T >::resize(), PLearn::TVec< T >::resize(), PLearn::Variable::size(), PLearn::TVec< T >::size(), u, PLearn::Variable::Var, PLearn::NaryVariable::varray, PLearn::TMat< T >::width(), and PLearn::Variable::width().
Referenced by build(), and FNetLayerVariable().
{
    if (varray.length() == 0 && n_inputs == -1)
        // Cannot do anything yet.
        return;
    if ( varray.size() != 4
         || n_hidden != varray[1].length()
         || n_inputs != varray[1].width() )
    {
        varray.resize(4);
        if (varray[0])
            n_inputs = varray[0]->width(); // Get n_inputs from first var if present.
        varray[1] = Var(n_hidden, n_inputs);
        varray[2] = Var(n_hidden);
        varray[3] = Var(2);
    }
    if (varray[0])
    {
        if (n_inputs != varray[0]->width())
            PLERROR("In FNetLayerVariable: input var 0 should have width = %d = n_inputs, but is %d\n",
                    n_inputs, varray[0]->width());
        if (n_hidden != varray[1]->length())
            PLERROR("In FNetLayerVariable: input var 1 should have length = %d = n_hidden, but is %d\n",
                    n_hidden, varray[1]->length());
        if (minibatch_size != varray[0]->length())
            PLERROR("In FNetLayerVariable: input var 0 should have length = %d = minibatch_size, but is %d\n",
                    minibatch_size, varray[0]->length());
        if (n_inputs != varray[1]->width())
            PLERROR("In FNetLayerVariable: the size of inputs and weights are not compatible for an affine application of weights on inputs");
        if (varray[2]->size() != n_hidden)
            PLERROR("In FNetLayerVariable: the biases vector should have the same length as the weights matrix number of rows.");
        if (normalize_inputs && (mu.length() != n_hidden || mu.width() != n_inputs))
        {
            mu.resize(n_hidden, n_inputs);
            mu.clear();
            invs.resize(n_hidden, n_inputs);
            invs.fill(1.0);
            mu2.resize(n_hidden, n_inputs);
            mu2.fill(0);
        }
        else // TODO Remove later, this is just a safety check.
            PLWARNING("In FNetLayerVariable::build_ - Using previously saved normalization parameters");
        inh.resize(minibatch_size, n_hidden);
        cum_inh.resize(minibatch_size, n_hidden);
        u.resize(minibatch_size);
        if (normalize_inputs)
            for (int i=0; i<minibatch_size; i++)
                u[i].resize(n_hidden, n_inputs);
        no_bprop_has_been_done = true;
        gradient_threshold = 0;
        if (avg_act_gradient < 0)
            avg_act_gradient = 0.0;
        // Initialize parameters.
        real delta = real(1.0 / n_inputs);
        fill_random_uniform(varray[1]->matValue, -delta, delta);
        varray[2]->matValue.fill(0.0);
        varray[3]->matValue.fill(1.0);
        if (!fast_exact_is_equal(c1_, 0))
            varray[3]->value[0] = c1_;
        if (!fast_exact_is_equal(c2_, 0))
            varray[3]->value[1] = c2_;
        // Set correct sizes.
        resize(minibatch_size, n_hidden);
    }
}
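Note the parameter initialization at the end of build_(): weights are drawn uniformly in a range that shrinks with the fan-in, biases start at zero, and the inhibition coefficients start at 1 unless fixed through the c1/c2 options. In equation form:

    W_{ij} \sim \mathcal{U}\left[-\frac{1}{n_{\text{inputs}}},\; \frac{1}{n_{\text{inputs}}}\right],
    \qquad b_i = 0, \qquad c_1 = c_2 = 1.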
string PLearn::FNetLayerVariable::classname ( ) const [virtual]
Reimplemented from PLearn::Object.
Definition at line 92 of file FNetLayerVariable.cc.
void PLearn::FNetLayerVariable::declareOptions ( OptionList & ol ) [static, protected]
Declares the options for this class.
Reimplemented from PLearn::NaryVariable.
Definition at line 214 of file FNetLayerVariable.cc.
References average_error_fraction_to_threshold, avg_act_gradient, backprop_to_inputs, PLearn::OptionBase::buildoption, c1_, c2_, PLearn::declareOption(), PLearn::NaryVariable::declareOptions(), exp_moving_average_coefficient, inhibit_by_sum, inhibit_next_units, PLearn::OptionBase::learntoption, min_stddev, minibatch_size, mu, n_hidden, n_inputs, normalize_inputs, and squashed_inhibition.
{
    declareOption(ol, "n_inputs", &FNetLayerVariable::n_inputs, OptionBase::buildoption,
        "    Number of inputs of the layer, for each element of the mini-batch.\n");

    declareOption(ol, "n_hidden", &FNetLayerVariable::n_hidden, OptionBase::buildoption,
        "    Number of outputs of the layer (hidden units), for each element of the mini-batch.\n");

    declareOption(ol, "minibatch_size", &FNetLayerVariable::minibatch_size, OptionBase::buildoption,
        "    Number of elements of each mini-batch.\n");

    declareOption(ol, "inhibit_next_units", &FNetLayerVariable::inhibit_next_units, OptionBase::buildoption,
        "    If true then activation of unit i contains minus the sum of the outputs of\n"
        "    all units j for j<i, i.e. y[k,i] = sigmoid(W (u[k,i] 1) - 1_{inhibit_next_units} sum_{j<i} y[k,j]).\n");

    declareOption(ol, "inhibit_by_sum", &FNetLayerVariable::inhibit_by_sum, OptionBase::buildoption,
        "    If true, then the inhibition will be based on the sum of the previous units'\n"
        "    activations, instead of their average.");

    declareOption(ol, "squashed_inhibition", &FNetLayerVariable::squashed_inhibition, OptionBase::buildoption,
        "    If true, then the inhibition will be squashed by a sigmoid (if false, c2 is not used).");

    declareOption(ol, "normalize_inputs", &FNetLayerVariable::normalize_inputs, OptionBase::buildoption,
        "    If true, then normalized input u[k,i]=(x[k] - mu[i])*invs[i], otherwise u[k,i]=x[k].\n"
        "    mu[i,j] is a moving average of the x[k,j]'s when |dC/da[k,i]| is above gradient_threshold.\n"
        "    Similarly, mu2[i,j] is a moving average of x[k,j]*x[k,j] when |dC/da[k,i]| is above gradient_threshold\n"
        "    and invs[i,j] = 1/sqrt(mu2[i,j] - mu[i,j]*mu[i,j]). The moving averages are exponential moving\n"
        "    averages with coefficient exp_moving_average_coefficient.\n");

    declareOption(ol, "min_stddev", &FNetLayerVariable::min_stddev, OptionBase::buildoption,
        "Used only when 'normalize_inputs' is true, any input whose standard deviation is less than this value\n"
        "will be considered as having this standard deviation (prevents numerical problems with constant inputs).");

    declareOption(ol, "backprop_to_inputs", &FNetLayerVariable::backprop_to_inputs, OptionBase::buildoption,
        "    If true then gradient is propagated to the inputs. When this object is the first layer\n"
        "    of a neural network, it is more efficient to set this option to false (which is its default).\n");

    declareOption(ol, "exp_moving_average_coefficient", &FNetLayerVariable::exp_moving_average_coefficient, OptionBase::buildoption,
        "    The moving average coefficient used in updating mu, var and gradient_threshold, with\n"
        "    updates of the form\n"
        "        newvalue = (1 - exp_moving_average_coefficient)*oldvalue + exp_moving_average_coefficient*summand\n"
        "    in order to obtain a moving average of the summands.\n");

    declareOption(ol, "average_error_fraction_to_threshold", &FNetLayerVariable::average_error_fraction_to_threshold, OptionBase::buildoption,
        "    The fraction of the average of |dC/da[k,i]| that determines the gradient_threshold.\n");

    declareOption(ol, "c1", &FNetLayerVariable::c1_, OptionBase::buildoption,
        "    Fixed coefficient c1. '0' means it will be optimized, starting from 1.\n");

    declareOption(ol, "c2", &FNetLayerVariable::c2_, OptionBase::buildoption,
        "    Fixed coefficient c2. '0' means it will be optimized, starting from 1.\n");

    // Learnt options.
    declareOption(ol, "avg_act_gradient", &FNetLayerVariable::avg_act_gradient, OptionBase::learntoption,
        "The exponential moving average of the absolute value of the gradient.");

    declareOption(ol, "mu", &FNetLayerVariable::mu, OptionBase::learntoption,
        "The centers for normalization.");

    declareOption(ol, "mu2", &FNetLayerVariable::mu2, OptionBase::learntoption,
        "The squared centers for computation of the variance.");

    declareOption(ol, "invs", &FNetLayerVariable::invs, OptionBase::learntoption,
        "The normalization factors.");

    inherited::declareOptions(ol);
}
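In equation form, the input normalization described by these options (restating the help text of normalize_inputs and min_stddev):

    u_{k,i,j} = (x_{k,j} - \mu_{i,j})\,\text{invs}_{i,j},
    \qquad \text{invs}_{i,j} = \frac{1}{\max\!\left(\sqrt{\mu2_{i,j} - \mu_{i,j}^2},\; \text{min\_stddev}\right)},

where \mu and \mu2 are exponential moving averages of x and x^2 with coefficient exp_moving_average_coefficient.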
static const PPath & PLearn::FNetLayerVariable::declaringFile ( ) [inline, static]
FNetLayerVariable * PLearn::FNetLayerVariable::deepCopy ( CopiesMap & copies ) const [virtual]
Reimplemented from PLearn::NaryVariable.
Definition at line 92 of file FNetLayerVariable.cc.
void PLearn::FNetLayerVariable::fprop ( ) [virtual]
compute output given input
Implements PLearn::Variable.
Definition at line 294 of file FNetLayerVariable.cc.
References b, cum_inh, PLearn::dot_product(), i, inh, inhibit_by_sum, inhibit_next_units, invs, j, PLearn::Variable::matValue, minibatch_size, PLearn::TMat< T >::mod(), mu, n_hidden, n_inputs, normalize_inputs, PLearn::sigmoid(), squashed_inhibition, u, PLearn::Variable::valuedata, PLearn::NaryVariable::varray, and x.
{
    real* x = varray[0]->valuedata;
    real* y = valuedata;
    real* b = varray[2]->valuedata;
    real c1 = varray[3]->valuedata[0];
    real c2 = varray[3]->valuedata[1];
    int mx = varray[0]->matValue.mod();
    int my = matValue.mod();
    for (int k=0; k<minibatch_size; k++, x+=mx, y+=my)
    {
        real cum_s = 0;
        Mat u_k = u[k];
        real* inh_k = inh[k];
        real* cum_inh_k = cum_inh[k];
        for (int i=0; i<n_hidden; i++)
        {
            real* Wi = varray[1]->matValue[i];
            real bi = b[i];
            if (inhibit_next_units && i>0)
            {
                if (inhibit_by_sum)
                    cum_inh_k[i] = cum_s;
                else
                    cum_inh_k[i] = cum_s / real(i);
                if (squashed_inhibition)
                    inh_k[i] = sigmoid(c2 * cum_inh_k[i]);
                else
                    inh_k[i] = cum_inh_k[i];
                bi -= c1*inh_k[i];
            }
            if (normalize_inputs)
            {
                real* mu_i = mu[i];
                real* invs_i = invs[i];
                real* u_ki = u_k[i];
                for (int j=0; j<n_inputs; j++)
                    u_ki[j] = (x[j] - mu_i[j])*invs_i[j];
                y[i] = sigmoid(dot_product(bi, u_ki, Wi, n_inputs));
            }
            else
                y[i] = sigmoid(dot_product(bi, x, Wi, n_inputs));
            cum_s += y[i];
        }
    }
}
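A summary of the forward computation in the default configuration (inhibit_next_units and squashed_inhibition true, inhibit_by_sum false), using the notation of the options above:

    \bar{y}_{k,i} = \frac{1}{i}\sum_{j<i} y_{k,j},
    \qquad \text{inh}_{k,i} = \sigma(c_2\,\bar{y}_{k,i}),
    \qquad y_{k,i} = \sigma\!\left(W_i \cdot u_{k,i} + b_i - c_1\,\text{inh}_{k,i}\right),

where \sigma is the sigmoid and u_{k,i} is either the normalized input (if normalize_inputs) or the raw input x_k.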
OptionList & PLearn::FNetLayerVariable::getOptionList ( ) const [virtual]
Reimplemented from PLearn::Object.
Definition at line 92 of file FNetLayerVariable.cc.
OptionMap & PLearn::FNetLayerVariable::getOptionMap ( ) const [virtual]
Reimplemented from PLearn::Object.
Definition at line 92 of file FNetLayerVariable.cc.
RemoteMethodMap & PLearn::FNetLayerVariable::getRemoteMethodMap ( ) const [virtual]
Reimplemented from PLearn::Object.
Definition at line 92 of file FNetLayerVariable.cc.
void PLearn::FNetLayerVariable::makeDeepCopyFromShallowCopy ( CopiesMap & copies ) [virtual]
Does the necessary operations to transform a shallow copy (this) into a deep copy by deep-copying all the members that need to be.
This needs to be overridden by every class that adds "complex" data members to the class, such as Vec, Mat, PP<Something>, etc. Typical implementation:

    void CLASS_OF_THIS::makeDeepCopyFromShallowCopy(CopiesMap& copies)
    {
        inherited::makeDeepCopyFromShallowCopy(copies);
        deepCopyField(complex_data_member1, copies);
        deepCopyField(complex_data_member2, copies);
        ...
    }
Parameters:
    copies: A map used by the deep-copy mechanism to keep track of already-copied objects.
Reimplemented from PLearn::NaryVariable.
Definition at line 425 of file FNetLayerVariable.cc.
References cum_inh, PLearn::deepCopyField(), inh, invs, PLearn::NaryVariable::makeDeepCopyFromShallowCopy(), mu, mu2, and u.
{
    inherited::makeDeepCopyFromShallowCopy(copies);
    deepCopyField(mu, copies);
    deepCopyField(invs, copies);
    deepCopyField(mu2, copies);
    deepCopyField(u, copies);
    deepCopyField(inh, copies);
    deepCopyField(cum_inh, copies);
}
void PLearn::FNetLayerVariable::recomputeSize ( int & l, int & w ) const [virtual]
Recomputes the length l and width w that this variable should have, according to its parent variables.
This is used for ex. by sizeprop() The default version stupidly returns the current dimensions, so make sure to overload it in subclasses if this is not appropriate.
Reimplemented from PLearn::Variable.
Definition at line 285 of file FNetLayerVariable.cc.
References PLearn::TVec< T >::length(), and PLearn::NaryVariable::varray.
{
    if (varray.length() >= 2 && varray[0] && varray[1])
    {
        l = varray[0]->length();
        w = varray[1]->length();
    }
    else
        l = w = 0;
}
StaticInitializer PLearn::FNetLayerVariable::_static_initializer_ [static]
Reimplemented from PLearn::NaryVariable.
Definition at line 99 of file FNetLayerVariable.h.
real PLearn::FNetLayerVariable::average_error_fraction_to_threshold
Definition at line 85 of file FNetLayerVariable.h.
Referenced by bprop(), and declareOptions().
real PLearn::FNetLayerVariable::avg_act_gradient [private]
Definition at line 65 of file FNetLayerVariable.h.
Referenced by bprop(), build_(), declareOptions(), and FNetLayerVariable().
bool PLearn::FNetLayerVariable::backprop_to_inputs
Definition at line 83 of file FNetLayerVariable.h.
Referenced by bprop(), and declareOptions().
real PLearn::FNetLayerVariable::c1_
OPTIONS.
Definition at line 74 of file FNetLayerVariable.h.
Referenced by bprop(), build_(), and declareOptions().
real PLearn::FNetLayerVariable::c2_
Definition at line 75 of file FNetLayerVariable.h.
Referenced by bprop(), build_(), and declareOptions().
Mat PLearn::FNetLayerVariable::cum_inh [private]
Definition at line 69 of file FNetLayerVariable.h.
Referenced by bprop(), build_(), fprop(), and makeDeepCopyFromShallowCopy().
real PLearn::FNetLayerVariable::exp_moving_average_coefficient
Definition at line 84 of file FNetLayerVariable.h.
Referenced by bprop(), and declareOptions().
real PLearn::FNetLayerVariable::gradient_threshold [private]
Definition at line 61 of file FNetLayerVariable.h.
Mat PLearn::FNetLayerVariable::inh [private]
Definition at line 68 of file FNetLayerVariable.h.
Referenced by bprop(), build_(), fprop(), and makeDeepCopyFromShallowCopy().
bool PLearn::FNetLayerVariable::inhibit_by_sum
Definition at line 80 of file FNetLayerVariable.h.
Referenced by bprop(), declareOptions(), and fprop().
bool PLearn::FNetLayerVariable::inhibit_next_units
Definition at line 79 of file FNetLayerVariable.h.
Referenced by bprop(), declareOptions(), and fprop().
Mat PLearn::FNetLayerVariable::invs [private]
Definition at line 60 of file FNetLayerVariable.h.
Referenced by bprop(), build_(), fprop(), and makeDeepCopyFromShallowCopy().
real PLearn::FNetLayerVariable::min_stddev
Definition at line 86 of file FNetLayerVariable.h.
Referenced by bprop(), and declareOptions().
int PLearn::FNetLayerVariable::minibatch_size
Definition at line 78 of file FNetLayerVariable.h.
Referenced by bprop(), build_(), declareOptions(), and fprop().
Mat PLearn::FNetLayerVariable::mu [private]
INTERNAL LEARNED PARAMETERS.
Definition at line 59 of file FNetLayerVariable.h.
Referenced by bprop(), build_(), declareOptions(), fprop(), and makeDeepCopyFromShallowCopy().
Mat PLearn::FNetLayerVariable::mu2 [private]
INTERNAL COMPUTATION.
Definition at line 64 of file FNetLayerVariable.h.
Referenced by bprop(), build_(), and makeDeepCopyFromShallowCopy().
int PLearn::FNetLayerVariable::n_hidden
Definition at line 77 of file FNetLayerVariable.h.
Referenced by bprop(), build_(), declareOptions(), and fprop().
int PLearn::FNetLayerVariable::n_inputs
Definition at line 76 of file FNetLayerVariable.h.
Referenced by bprop(), build_(), declareOptions(), and fprop().
bool PLearn::FNetLayerVariable::no_bprop_has_been_done [private]
Definition at line 66 of file FNetLayerVariable.h.
Referenced by build_().
bool PLearn::FNetLayerVariable::normalize_inputs
Definition at line 82 of file FNetLayerVariable.h.
Referenced by bprop(), build_(), declareOptions(), and fprop().
bool PLearn::FNetLayerVariable::squashed_inhibition
Definition at line 81 of file FNetLayerVariable.h.
Referenced by bprop(), declareOptions(), and fprop().
TVec< Mat > PLearn::FNetLayerVariable::u [private]
Definition at line 67 of file FNetLayerVariable.h.
Referenced by bprop(), build_(), fprop(), and makeDeepCopyFromShallowCopy().