PLearn 0.1
Compute the Negative-Log-Marginal-Likelihood for Gaussian Process Regression.
#include <GaussianProcessNLLVariable.h>
Public Member Functions

GaussianProcessNLLVariable ()
    Default constructor, usually does nothing.
GaussianProcessNLLVariable (Kernel *kernel, real noise, Mat inputs, Mat targets, const TVec< string > &hyperparam_names, const VarArray &hyperparam_vars, bool allow_bprop=true, bool save_gram_matrix=false, PPath expdir="")
    Constructor initializing from input variables.
virtual void recomputeSize (int &l, int &w) const
    Recomputes the length l and width w that this variable should have, according to its parent variables.
virtual void fprop ()
    Computes the output given the input.
virtual void bprop ()
const Mat & alpha () const
    Accessor to the last computed 'alpha' matrix in an fprop.
const Mat & gram () const
    Accessor to the last computed Gram matrix in an fprop.
const Mat & gramInverse () const
    Accessor to the last computed Gram matrix inverse in an fprop.
virtual string classname () const
virtual OptionList & getOptionList () const
virtual OptionMap & getOptionMap () const
virtual RemoteMethodMap & getRemoteMethodMap () const
virtual GaussianProcessNLLVariable * deepCopy (CopiesMap &copies) const
virtual void build ()
    Post-constructor.
virtual void makeDeepCopyFromShallowCopy (CopiesMap &copies)
    Transforms a shallow copy into a deep copy.

Static Public Member Functions

static void fbpropFragments (Kernel *kernel, real noise, const Mat &inputs, const Mat &targets, bool compute_inverse, bool save_gram_matrix, const PPath &expdir, Mat &gram, Mat &L, Mat &alpha, Mat &inv, Vec &tmpch, Mat &tmprhs)
    Computes the elements required for log-likelihood computation, fprop, and bprop.
static void logVarray (const VarArray &varr, const string &title="", bool debug=false)
    Minor utility function to dump the contents of a varray to a log.
static string _classname_ ()
static OptionList & _getOptionList_ ()
static RemoteMethodMap & _getRemoteMethodMap_ ()
static Object * _new_instance_for_typemap_ ()
static bool _isa_ (const Object *o)
static void _static_initialize_ ()
static const PPath & declaringFile ()

Public Attributes

bool m_save_gram_matrix
    If true, the Gram matrix is saved before undergoing Cholesky decomposition; useful for debugging if the matrix is quasi-singular.
PPath m_expdir
    Expdir where to save the Gram matrix, if 'save_gram_matrix' is requested.

Static Public Attributes

static StaticInitializer _static_initializer_

Static Protected Member Functions

static void declareOptions (OptionList &ol)
    Declares the class options.

Protected Attributes

Kernel * m_kernel
    The kernel we are currently using.
real m_noise
    Observation noise to be added to the diagonal of the Gram matrix.
Mat m_inputs
    Matrix of inputs.
Mat m_targets
    Matrix of regression targets.
TVec< string > m_hyperparam_names
    Name of each hyperparameter contained in hyperparam_vars.
VarArray m_hyperparam_vars
    Variables standing for each hyperparameter, used to accumulate the gradient with respect to them.
bool m_allow_bprop
    Whether bprops are allowed.
Mat m_gram
    Holds the Gram matrix.
Mat m_gram_derivative
    Holds the derivative of the Gram matrix with respect to a hyperparameter.
Mat m_cholesky_gram
    Holds the Cholesky decomposition of m_gram.
Mat m_alpha_t
    Solution of the linear system gram*alpha = targets.
Mat m_alpha_buf
    Temporary buffer to hold the transpose of m_alpha_t; used for the alpha() accessor and outside-world interface.
Mat m_inverse_gram
    Inverse of the Gram matrix.
Vec m_cholesky_tmp
    Temporary storage for the Cholesky decomposition.
Mat m_rhs_tmp
    Temporary storage holding the right-hand side to be solved by Cholesky.

Private Types

typedef NaryVariable inherited

Private Member Functions

void build_ ()
    This does the actual building.
Compute the Negative-Log-Marginal-Likelihood for Gaussian Process Regression.
This Variable computes the negative log marginal likelihood associated with Gaussian Process Regression (see GaussianProcessRegressor). It is primarily used to carry out hyperparameter optimization by conjugate gradient descent.
To compute both the fprop and the bprop (gradient of the marginal NLL w.r.t. each hyperparameter), it requires the Kernel object used, the matrix of inputs, the matrix of targets, and the Variables that wrap the hyperparameter options within the Kernel object structure (presumably ObjectOptionVariable, or similar). These must be scalar Variables. To get something like Automatic Relevance Determination, specify separately each Variable (in the PLearn sense) that corresponds to a given input hyperparameter.
Definition at line 68 of file GaussianProcessNLLVariable.h.
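For reference, the quantity this Variable computes is the standard Gaussian-process marginal NLL. A sketch in LaTeX, with notation chosen here rather than taken from PLearn (K_theta is the noise-augmented Gram matrix, y the training targets, n the number of training points):

% Negative log marginal likelihood, minimized w.r.t. the hyperparameters theta
\[
\mathrm{NLL}(\theta)
  = \tfrac{1}{2}\, y^\top K_\theta^{-1} y
  + \tfrac{1}{2}\, \log \det K_\theta
  + \tfrac{n}{2}\, \log 2\pi
\]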
typedef NaryVariable PLearn::GaussianProcessNLLVariable::inherited [private]
Reimplemented from PLearn::NaryVariable.
Definition at line 70 of file GaussianProcessNLLVariable.h.
PLearn::GaussianProcessNLLVariable::GaussianProcessNLLVariable ( )
Default constructor, usually does nothing.
Definition at line 74 of file GaussianProcessNLLVariable.cc.
: m_save_gram_matrix(0),
  m_kernel(0),
  m_noise(0),
  m_allow_bprop(true)
{ }
PLearn::GaussianProcessNLLVariable::GaussianProcessNLLVariable ( Kernel * kernel,
    real noise,
    Mat inputs,
    Mat targets,
    const TVec< string > & hyperparam_names,
    const VarArray & hyperparam_vars,
    bool allow_bprop = true,
    bool save_gram_matrix = false,
    PPath expdir = "" )
Constructor initializing from input variables.
Parameters:
    kernel: the kernel to use
    noise: observation noise to add to the diagonal of the Gram matrix
    inputs: matrix of training inputs
    targets: matrix of training targets (may be multivariate)
    hyperparam_names: names of the kernel hyperparameters w.r.t. which the NLL should be backpropagated
    hyperparam_vars: PLearn Variables wrapping the kernel hyperparameters
    allow_bprop: if true, assume we will be performing bprops on the Variable; if not, only fprops are allowed. BProps involve computing a full inverse of the Gram matrix
    save_gram_matrix: whether the Gram matrix should be saved (useful for debugging)
    expdir: where to save the Gram matrix if required
Definition at line 82 of file GaussianProcessNLLVariable.cc.
References build().
: inherited(hyperparam_vars, 1, 1),
  m_save_gram_matrix(save_gram_matrix),
  m_expdir(expdir),
  m_kernel(kernel),
  m_noise(noise),
  m_inputs(inputs),
  m_targets(targets),
  m_hyperparam_names(hyperparam_names),
  m_hyperparam_vars(hyperparam_vars),
  m_allow_bprop(allow_bprop)
{
    build();
}
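A minimal, hypothetical usage sketch follows. Only the constructor call itself reflects this page; the helper name makeGPNLL, the noise value, and the assumption that the caller already has a kernel, data matrices, and hyperparameter Variables set up are illustrative.

#include <GaussianProcessNLLVariable.h>

using namespace PLearn;

// Hypothetical helper: builds the NLL variable for hyperparameter
// optimization from pre-existing pieces supplied by the caller.
// Var is PLearn's smart pointer to Variable; the raw pointer returned
// by 'new' converts to it implicitly.
Var makeGPNLL(Kernel* kernel, Mat inputs, Mat targets,
              const TVec<string>& hyperparam_names,
              const VarArray& hyperparam_vars)
{
    real noise = 1e-6;   // illustrative jitter added to the Gram diagonal
    return new GaussianProcessNLLVariable(
        kernel, noise, inputs, targets,
        hyperparam_names, hyperparam_vars,
        true,    // allow_bprop: we intend to optimize hyperparameters
        false,   // save_gram_matrix: no debugging dump
        "");     // expdir: unused when not saving
}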
string PLearn::GaussianProcessNLLVariable::_classname_ ( ) [static]
Reimplemented from PLearn::NaryVariable.
Definition at line 72 of file GaussianProcessNLLVariable.cc.
OptionList & PLearn::GaussianProcessNLLVariable::_getOptionList_ ( ) [static]
Reimplemented from PLearn::NaryVariable.
Definition at line 72 of file GaussianProcessNLLVariable.cc.
RemoteMethodMap & PLearn::GaussianProcessNLLVariable::_getRemoteMethodMap_ ( ) [static]
Reimplemented from PLearn::NaryVariable.
Definition at line 72 of file GaussianProcessNLLVariable.cc.
bool PLearn::GaussianProcessNLLVariable::_isa_ ( const Object * o ) [static]
Reimplemented from PLearn::NaryVariable.
Definition at line 72 of file GaussianProcessNLLVariable.cc.
Object * PLearn::GaussianProcessNLLVariable::_new_instance_for_typemap_ ( ) [static]
Reimplemented from PLearn::Object.
Definition at line 72 of file GaussianProcessNLLVariable.cc.
void PLearn::GaussianProcessNLLVariable::_static_initialize_ ( ) [static]
Reimplemented from PLearn::NaryVariable.
Definition at line 72 of file GaussianProcessNLLVariable.cc.
const Mat & PLearn::GaussianProcessNLLVariable::alpha ( ) const
Accessor to the last computed 'alpha' matrix in an fprop.
Definition at line 155 of file GaussianProcessNLLVariable.cc.
References PLearn::TMat< T >::length(), m_alpha_buf, m_alpha_t, PLearn::TMat< T >::resize(), PLearn::transpose(), and PLearn::TMat< T >::width().
{
    m_alpha_buf.resize(m_alpha_t.width(), m_alpha_t.length());
    transpose(m_alpha_t, m_alpha_buf);
    return m_alpha_buf;
}
void PLearn::GaussianProcessNLLVariable::bprop ( ) [virtual]
Implements PLearn::Variable.
Definition at line 204 of file GaussianProcessNLLVariable.cc.
References PLearn::Kernel::computeGramMatrixDerivative(), PLearn::Variable::gradient, i, j, PLearn::TMat< T >::length(), m, m_allow_bprop, m_alpha_t, m_gram_derivative, m_hyperparam_names, m_hyperparam_vars, m_inverse_gram, m_kernel, n, PLASSERT, PLASSERT_MSG, PLearn::Variable::row(), PLearn::TVec< T >::size(), and PLearn::TMat< T >::width().
{
    PLASSERT_MSG( m_allow_bprop,
                  "GaussianProcessNLLVariable must be constructed with the option "
                  "'will_bprop'=True in order to call bprop" );
    PLASSERT( m_hyperparam_names.size() == m_hyperparam_vars.size() );
    PLASSERT( m_alpha_t.width() == m_inverse_gram.width() );
    PLASSERT( m_inverse_gram.width() == m_inverse_gram.length() );
    PLASSERT( m_kernel );

    // Loop over the hyperparameters in order to compute the derivative of the
    // gram matrix once for each hyperparameter.  Then loop over the target
    // variables to accumulate the gradient.  For each target, we must compute
    //
    //     trace((K^-1 - alpha*alpha') * dK/dtheta_j)
    //
    // Since both the first term inside the trace and the derivative of the
    // gram matrix are symmetric square matrices, the trace is efficiently
    // computed as the sum of the elementwise product of those matrices.
    //
    // Don't forget that m_alpha_t is transposed.
    for (int j=0, m=m_hyperparam_names.size() ; j<m ; ++j) {
        real dnll_dj = 0;
        m_kernel->computeGramMatrixDerivative(m_gram_derivative,
                                              m_hyperparam_names[j]);
        for (int i=0, n=m_alpha_t.length() ; i<n ; ++i) {
            real* curalpha = m_alpha_t[i];
            real cur_trace = 0.0;

            // Sum over all rows and columns of matrix
            real* curalpha_row = curalpha;
            for (int row=0, nrows=m_inverse_gram.length() ; row<nrows
                     ; ++row, ++curalpha_row)
            {
                real* p_inverse_gram     = m_inverse_gram[row];
                real* p_gram_derivative  = m_gram_derivative[row];
                real  curalpha_row_value = *curalpha_row;
                real* curalpha_col       = curalpha;
                real  row_trace          = 0.0;

                for (int col=0 ; col <= row ; ++col, ++curalpha_col)
                {
                    if (col == row)
                        row_trace *= 2.;

                    row_trace +=
                        (*p_inverse_gram++ - curalpha_row_value * *curalpha_col)
                        * *p_gram_derivative++;

                    // curtrace +=
                    //     (m_inverse_gram(row,col) - curalpha(row,0)*curalpha(col,0))
                    //     * m_gram_derivative(row,col);
                }
                cur_trace += row_trace;
            }

            dnll_dj += cur_trace / 2.0;
        }

        m_hyperparam_vars[j]->gradient[0] += dnll_dj * gradient[0];
    }
}
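In math terms, each pass of the inner loop accumulates, per target column, the standard GP gradient named in the comment above (symbols ours, not PLearn's):

\[
\frac{\partial\,\mathrm{NLL}}{\partial \theta_j}
  = \tfrac{1}{2}\,\mathrm{tr}\!\left(
      \left( K^{-1} - \alpha\,\alpha^\top \right)
      \frac{\partial K}{\partial \theta_j} \right),
  \qquad \alpha = K^{-1} y .
\]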
void PLearn::GaussianProcessNLLVariable::build ( ) [virtual]
Post-constructor.
The normal implementation should call simply inherited::build(), then this class's build_(). This method should be callable again at later times, after modifying some option fields to change the "architecture" of the object.
Reimplemented from PLearn::NaryVariable.
Definition at line 110 of file GaussianProcessNLLVariable.cc.
References PLearn::NaryVariable::build(), and build_().
Referenced by GaussianProcessNLLVariable().
{ inherited::build(); build_(); }
void PLearn::GaussianProcessNLLVariable::build_ ( ) [private]
This does the actual building.
Reimplemented from PLearn::NaryVariable.
Definition at line 147 of file GaussianProcessNLLVariable.cc.
References PLearn::TMat< T >::isNotNull(), m_inputs, m_kernel, m_targets, and PLASSERT.
Referenced by build().
string PLearn::GaussianProcessNLLVariable::classname ( ) const [virtual]
Reimplemented from PLearn::Object.
Definition at line 72 of file GaussianProcessNLLVariable.cc.
void PLearn::GaussianProcessNLLVariable::declareOptions ( OptionList & ol ) [static, protected]
Declares the class options.
Reimplemented from PLearn::NaryVariable.
Definition at line 135 of file GaussianProcessNLLVariable.cc.
References PLearn::OptionBase::buildoption, PLearn::declareOption(), PLearn::NaryVariable::declareOptions(), and m_save_gram_matrix.
{
    declareOption(
        ol, "save_gram_matrix", &GaussianProcessNLLVariable::m_save_gram_matrix,
        OptionBase::buildoption,
        "If true, the Gram matrix is saved before undergoing Cholesky\n"
        "decomposition; useful for debugging if the matrix is quasi-singular.");

    // Now call the parent class' declareOptions
    inherited::declareOptions(ol);
}
static const PPath & PLearn::GaussianProcessNLLVariable::declaringFile ( ) [inline, static]
Reimplemented from PLearn::NaryVariable.
Definition at line 161 of file GaussianProcessNLLVariable.h.
GaussianProcessNLLVariable * PLearn::GaussianProcessNLLVariable::deepCopy ( CopiesMap & copies ) const [virtual]
Reimplemented from PLearn::NaryVariable.
Definition at line 72 of file GaussianProcessNLLVariable.cc.
void PLearn::GaussianProcessNLLVariable::fbpropFragments ( Kernel * kernel,
    real noise,
    const Mat & inputs,
    const Mat & targets,
    bool compute_inverse,
    bool save_gram_matrix,
    const PPath & expdir,
    Mat & gram,
    Mat & L,
    Mat & alpha,
    Mat & inv,
    Vec & tmpch,
    Mat & tmprhs ) [static]
Compute the elements required for log-likelihood computation, fprop, and bprop.
Static since this is called by GaussianProcessRegressor.
Parameters:
    [in] kernel: the kernel to use
    [in] noise: observation noise to add to the diagonal of the Gram matrix
    [in] inputs: matrix of training inputs
    [in] targets: matrix of training targets (may be multivariate)
    [in] compute_inverse: whether to compute the inverse of the Gram matrix
    [in] save_gram_matrix: whether to save the computed Gram matrix
    [in] expdir: if saving the Gram matrix, where to save it
    [out] gram: the kernel (Gram) matrix
    [out] L: Cholesky decomposition of the Gram matrix
    [out] alpha: solution to the linear system gram*alpha = targets
    [out] inv: if required, the inverse Gram matrix
    [in,out] tmpch: temporary storage for the Cholesky decomposition
    [in,out] tmprhs: temporary storage for the right-hand side
Definition at line 270 of file GaussianProcessNLLVariable.cc.
References PLearn::addToDiagonal(), PLearn::Kernel::computeGramMatrix(), PLearn::endl(), PLearn::fillItSymmetric(), gram(), PLearn::identityMatrix(), PLearn::Kernel::is_symmetric, PLearn::TMat< T >::isSymmetric(), PLearn::TMat< T >::length(), PLASSERT, PLASSERT_MSG, PLCHECK, PLearn::TMat< T >::resize(), PLearn::TVec< T >::resize(), PLearn::savePMat(), PLearn::Kernel::setDataForKernelMatrix(), PLearn::solveLinearSystemByCholesky(), PLearn::TMat< T >::subMatColumns(), PLearn::tostring(), PLearn::transpose(), and PLearn::TMat< T >::width().
Referenced by fprop().
{
    PLASSERT( kernel );
    PLASSERT( inputs.length() == targets.length() );
    const int trainlength = inputs.length();
    const int targetsize  = targets.width();

    // The RHS matrix (when solving the linear system Gram*Params=RHS) is made
    // up of two parts: the regression targets themselves, and the identity
    // matrix if we requested them (for confidence intervals).  After solving
    // the linear system, set the gram-inverse appropriately.
    int rhs_width = targetsize + (compute_inverse? trainlength : 0);
    tmp_rhs.resize(trainlength, rhs_width);
    tmp_rhs.subMatColumns(0, targetsize) << targets;
    if (compute_inverse) {
        Mat rhs_identity = tmp_rhs.subMatColumns(targetsize, trainlength);
        identityMatrix(rhs_identity);
    }

    // Compute Gram Matrix and add weight decay to diagonal
    kernel->setDataForKernelMatrix(inputs);
    gram.resize(trainlength, trainlength);
    kernel->computeGramMatrix(gram);
    addToDiagonal(gram, noise);

    // The PLearn code relies on the matrix actually being symmetric in memory
    // (assumption which LAPACK does not make).  Symmetrize the matrix.
    PLCHECK(kernel->is_symmetric);
    PLASSERT_MSG(gram.isSymmetric(false), "Gram matrix is not symmetric");
    fillItSymmetric(gram);

    // Save the Gram matrix if requested
    if (save_gram_matrix) {
        static int counter = 1;
        string filename = expdir / ("gram_matrix_" +
                                    tostring(counter++) + ".pmat");
        savePMat(filename, gram);
    }

    // Dump a fragment of the Gram Matrix to the debug log
    DBG_MODULE_LOG << "Gram fragment: "
                   << gram(0,0) << ' '
                   << gram(1,0) << ' '
                   << gram(1,1) << endl;

    // Compute Cholesky decomposition and solve the linear system
    alpha_t.resize(trainlength, rhs_width);
    L.resize(trainlength, trainlength);
    tmp_chol.resize(trainlength);
    solveLinearSystemByCholesky(gram, tmp_rhs, alpha_t, &L, &tmp_chol);

    // Must return transpose here since the code has been modified to work with
    // a transposed alpha, to better interface with lapack (much faster in the
    // latter case to avoid superfluous transposes).
    if (compute_inverse) {
        inv     = alpha_t.subMatColumns(targetsize, trainlength);
        alpha_t = transpose(alpha_t.subMatColumns(0, targetsize));
    }
    else
        alpha_t = transpose(alpha_t);
}
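The augmented right-hand side built above lets a single Cholesky solve produce both the regression coefficients and, when requested, the Gram inverse. Schematically (notation ours):

% With K = L L^T the noise-augmented Gram matrix, Y the targets, I the identity:
\[
K\,[\,A \mid B\,] = [\,Y \mid I\,]
\;\Longrightarrow\;
A = K^{-1} Y \ \text{(alpha)}, \qquad B = K^{-1} .
\]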
void PLearn::GaussianProcessNLLVariable::fprop ( ) [virtual]
Computes the output given the input.
Implements PLearn::Variable.
Definition at line 166 of file GaussianProcessNLLVariable.cc.
References PLearn::TMat< T >::column(), PLearn::dot(), fbpropFragments(), PLearn::VarArray::fprop(), i, PLearn::TMat< T >::length(), m, m_allow_bprop, m_alpha_t, m_cholesky_gram, m_cholesky_tmp, m_expdir, m_gram, m_hyperparam_vars, m_inputs, m_inverse_gram, m_kernel, m_noise, M_PI, m_rhs_tmp, m_save_gram_matrix, m_targets, n, pl_log, PLearn::TMat< T >::row(), PLearn::Variable::value, and PLearn::TMat< T >::width().
{
    // logVarray(m_hyperparam_vars, "FProp current hyperparameters:", true);

    // Ensure that the current hyperparameter variable values are propagated
    // into kernel options
    m_hyperparam_vars.fprop();

    fbpropFragments(m_kernel, m_noise, m_inputs, m_targets, m_allow_bprop,
                    m_save_gram_matrix, m_expdir,
                    m_gram, m_cholesky_gram, m_alpha_t, m_inverse_gram,
                    m_cholesky_tmp, m_rhs_tmp);

    // Assuming y is a column vector...  For multivariate targets, we
    // separately dot each column of the targets with corresponding columns of
    // alpha, and add as many of the other two terms as there are variables
    //
    //     0.5 * y'*alpha + sum(log(diag(L))) + 0.5*n*log(2*pi)
    //
    // Don't forget that alpha_t is transposed
    const int n = m_alpha_t.width();
    const int m = m_alpha_t.length();

    real logdet_log2pi = 0;
    for (int i=0 ; i<n ; ++i)
        logdet_log2pi += pl_log(m_cholesky_gram(i,i));
    logdet_log2pi += 0.5 * n * pl_log(2*M_PI);

    real nll = 0;
    for (int i=0 ; i<m ; ++i)
        nll += 0.5*dot(m_targets.column(i), m_alpha_t.row(i)) + logdet_log2pi;
    value[0] = nll;
}
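Written out, the value accumulated by the loops above is, for m target columns y_i with solutions alpha_i and Cholesky factor L (notation ours):

\[
\mathrm{NLL} = \sum_{i=1}^{m} \left[
    \tfrac{1}{2}\, y_i^\top \alpha_i
  + \sum_{k=1}^{n} \log L_{kk}
  + \tfrac{n}{2}\, \log 2\pi
\right],
\]
where \(\sum_k \log L_{kk} = \tfrac{1}{2}\log\det K\).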
OptionList & PLearn::GaussianProcessNLLVariable::getOptionList ( ) const [virtual]
Reimplemented from PLearn::Object.
Definition at line 72 of file GaussianProcessNLLVariable.cc.
OptionMap & PLearn::GaussianProcessNLLVariable::getOptionMap ( ) const [virtual]
Reimplemented from PLearn::Object.
Definition at line 72 of file GaussianProcessNLLVariable.cc.
RemoteMethodMap & PLearn::GaussianProcessNLLVariable::getRemoteMethodMap ( ) const [virtual]
Reimplemented from PLearn::Object.
Definition at line 72 of file GaussianProcessNLLVariable.cc.
const Mat & PLearn::GaussianProcessNLLVariable::gram ( ) const [inline]
Accessor to the last computed gram matrix in an fprop.
Definition at line 148 of file GaussianProcessNLLVariable.h.
Referenced by fbpropFragments().
{ return m_gram; }
const Mat & PLearn::GaussianProcessNLLVariable::gramInverse ( ) const [inline]
Accessor to the last computed gram matrix inverse in an fprop.
Definition at line 151 of file GaussianProcessNLLVariable.h.
{ return m_inverse_gram; }
void PLearn::GaussianProcessNLLVariable::logVarray ( const VarArray & varr,
    const string & title = "",
    bool debug = false ) [static]
Minor utility function to dump the contents of a varray to a log.
Definition at line 405 of file GaussianProcessNLLVariable.cc.
References PLearn::endl(), PLearn::Variable::getName(), i, n, PLearn::right(), PLearn::TVec< T >::size(), PLearn::tostring(), and PLearn::Variable::value.
Referenced by PLearn::GaussianProcessRegressor::hyperOptimize().
{
    string entry = title + '\n';
    for (int i=0, n=varr.size() ; i<n ; ++i) {
        entry += right(varr[i]->getName(), 35) + ": " +
                 tostring(varr[i]->value[0]);
        if (i < n-1)
            entry += '\n';
    }
    if (debug) {
        DBG_MODULE_LOG << entry << endl;
    }
    else {
        MODULE_LOG << entry << endl;
    }
}
void PLearn::GaussianProcessNLLVariable::makeDeepCopyFromShallowCopy ( CopiesMap & copies ) [virtual]
Transforms a shallow copy into a deep copy.
Reimplemented from PLearn::NaryVariable.
Definition at line 116 of file GaussianProcessNLLVariable.cc.
References PLearn::deepCopyField(), m_alpha_buf, m_alpha_t, m_cholesky_gram, m_cholesky_tmp, m_gram, m_gram_derivative, m_hyperparam_names, m_hyperparam_vars, m_inputs, m_inverse_gram, m_kernel, m_rhs_tmp, m_targets, and PLearn::NaryVariable::makeDeepCopyFromShallowCopy().
{
    inherited::makeDeepCopyFromShallowCopy(copies);
    deepCopyField(m_kernel,          copies);
    deepCopyField(m_inputs,          copies);
    deepCopyField(m_targets,         copies);
    deepCopyField(m_hyperparam_names,copies);
    deepCopyField(m_hyperparam_vars, copies);
    deepCopyField(m_gram,            copies);
    deepCopyField(m_gram_derivative, copies);
    deepCopyField(m_cholesky_gram,   copies);
    deepCopyField(m_alpha_t,         copies);
    deepCopyField(m_alpha_buf,       copies);
    deepCopyField(m_inverse_gram,    copies);
    deepCopyField(m_cholesky_tmp,    copies);
    deepCopyField(m_rhs_tmp,         copies);
}
void PLearn::GaussianProcessNLLVariable::recomputeSize ( int & l, int & w ) const [virtual]
Recomputes the length l and width w that this variable should have, according to its parent variables.
This is used, for example, by sizeprop(). The default version stupidly returns the current dimensions, so make sure to overload it in subclasses if this is not appropriate.
Reimplemented from PLearn::Variable.
Definition at line 101 of file GaussianProcessNLLVariable.cc.
{
// This is always the case for this variable
l = 1;
w = 1;
}
bool PLearn::GaussianProcessNLLVariable::m_allow_bprop [protected]
Whether bprops are allowed.
Definition at line 191 of file GaussianProcessNLLVariable.h.
Mat PLearn::GaussianProcessNLLVariable::m_alpha_buf [mutable, protected]
Temporary buffer to hold the transpose of m_alpha_t; used for the alpha() accessor and outside-world interface.
Definition at line 209 of file GaussianProcessNLLVariable.h.
Referenced by alpha(), and makeDeepCopyFromShallowCopy().
Mat PLearn::GaussianProcessNLLVariable::m_alpha_t [protected]
Solution of the linear system gram*alpha = targets.
This is actually stored as a transpose to interface better with lapack.
Definition at line 205 of file GaussianProcessNLLVariable.h.
Referenced by alpha(), bprop(), fprop(), and makeDeepCopyFromShallowCopy().
Mat PLearn::GaussianProcessNLLVariable::m_cholesky_gram [protected]
Holds the Cholesky decomposition of m_gram.
Definition at line 201 of file GaussianProcessNLLVariable.h.
Referenced by fprop(), and makeDeepCopyFromShallowCopy().
Vec PLearn::GaussianProcessNLLVariable::m_cholesky_tmp [protected]
Temporary storage for the Cholesky decomposition.
Definition at line 215 of file GaussianProcessNLLVariable.h.
Referenced by fprop(), and makeDeepCopyFromShallowCopy().
PPath PLearn::GaussianProcessNLLVariable::m_expdir
Expdir where to save the Gram matrix, if 'save_gram_matrix' is requested.
Definition at line 80 of file GaussianProcessNLLVariable.h.
Referenced by fprop().
Mat PLearn::GaussianProcessNLLVariable::m_gram [protected]
Holds the Gram matrix.
Definition at line 194 of file GaussianProcessNLLVariable.h.
Referenced by fprop(), and makeDeepCopyFromShallowCopy().
Mat PLearn::GaussianProcessNLLVariable::m_gram_derivative [protected]
Holds the derivative of the Gram matrix with respect to a hyperparameter.
Definition at line 198 of file GaussianProcessNLLVariable.h.
Referenced by bprop(), and makeDeepCopyFromShallowCopy().
TVec<string> PLearn::GaussianProcessNLLVariable::m_hyperparam_names [protected]
Name of each hyperparameter contained in hyperparam_vars.
The name should be such that m_kernel->computeGramMatrixDerivative works.
Definition at line 184 of file GaussianProcessNLLVariable.h.
Referenced by bprop(), and makeDeepCopyFromShallowCopy().
VarArray PLearn::GaussianProcessNLLVariable::m_hyperparam_vars [protected]
Variables standing for each hyperparameter, used to accumulate the gradient with respect to them.
Definition at line 188 of file GaussianProcessNLLVariable.h.
Referenced by bprop(), fprop(), and makeDeepCopyFromShallowCopy().
Mat PLearn::GaussianProcessNLLVariable::m_inputs [protected]
Matrix of inputs.
Definition at line 177 of file GaussianProcessNLLVariable.h.
Referenced by build_(), fprop(), and makeDeepCopyFromShallowCopy().
Mat PLearn::GaussianProcessNLLVariable::m_inverse_gram [protected]
Inverse of the Gram matrix.
Definition at line 212 of file GaussianProcessNLLVariable.h.
Referenced by bprop(), fprop(), and makeDeepCopyFromShallowCopy().
Kernel* PLearn::GaussianProcessNLLVariable::m_kernel [protected]
Current kernel we should be using.
Definition at line 171 of file GaussianProcessNLLVariable.h.
Referenced by bprop(), build_(), fprop(), and makeDeepCopyFromShallowCopy().
real PLearn::GaussianProcessNLLVariable::m_noise [protected]
Observation noise to be added to the diagonal of the Gram matrix.
Definition at line 174 of file GaussianProcessNLLVariable.h.
Referenced by fprop().
Mat PLearn::GaussianProcessNLLVariable::m_rhs_tmp [protected]
Temporary storage for holding the right-hand-side to be solved by Cholesky.
Definition at line 218 of file GaussianProcessNLLVariable.h.
Referenced by fprop(), and makeDeepCopyFromShallowCopy().
bool PLearn::GaussianProcessNLLVariable::m_save_gram_matrix
If true, the Gram matrix is saved before undergoing Cholesky decomposition; useful for debugging if the matrix is quasi-singular.
Definition at line 77 of file GaussianProcessNLLVariable.h.
Referenced by declareOptions(), and fprop().
Mat PLearn::GaussianProcessNLLVariable::m_targets [protected]
Matrix of regression targets.
Definition at line 180 of file GaussianProcessNLLVariable.h.
Referenced by build_(), fprop(), and makeDeepCopyFromShallowCopy().