PLearn 0.1
PLearn::GaussianProcessRegressor Class Reference

Implements Gaussian Process Regression (GPR) with an arbitrary kernel. More...

#include <GaussianProcessRegressor.h>

Inheritance diagram for PLearn::GaussianProcessRegressor: (figure omitted)
Collaboration diagram for PLearn::GaussianProcessRegressor: (figure omitted)


Public Types

typedef PConditionalDistribution inherited

Public Member Functions

 GaussianProcessRegressor ()
virtual ~GaussianProcessRegressor ()
virtual void makeDeepCopyFromShallowCopy (CopiesMap &copies)
 Transforms a shallow copy into a deep copy.
virtual void setInput (const Vec &input) const
 Set the input part before using the inherited methods.
virtual double log_density (const Vec &x) const
 return log of probability density log(p(x))
virtual Vec expectation () const
 return E[X]
virtual void expectation (Vec expected_y) const
 return E[X]
virtual Mat variance () const
 return Var[X]
virtual void variance (Vec diag_variances) const
virtual void build ()
 Simply calls inherited::build() then build_()
virtual void forget ()
 *** SUBCLASS WRITING: ***
virtual int outputsize () const
 SUBCLASS WRITING: override this so that it returns the size of this learner's output, as a function of its inputsize(), targetsize() and set options.
virtual void train ()
 The role of the train method is to bring the learner up to stage==nstages, updating the stats with training costs measured on-line in the process.
virtual void computeOutput (const Vec &input, Vec &output) const
 Produce outputs according to what is specified in outputs_def.
virtual void computeCostsFromOutputs (const Vec &input, const Vec &output, const Vec &target, Vec &costs) const
 This should be defined in subclasses to compute the weighted costs from already computed output.
virtual void computeOutputAndCosts (const Vec &input, const Vec &target, Vec &output, Vec &costs) const
 Default calls computeOutput and computeCostsFromOutputs You may overload this if you have a more efficient way to compute both output and weighted costs at the same time.
virtual void computeCostsOnly (const Vec &input, const Vec &target, Vec &costs) const
 Default calls computeOutputAndCosts This may be overloaded if there is a more efficient way to compute the costs directly, without computing the whole output vector.
virtual TVec< string > getTestCostNames () const
 This should return the names of the costs computed by computeCostsFromOutputs.
virtual TVec< string > getTrainCostNames () const
 This should return the names of the objective costs that the train method computes and for which it updates the VecStatsCollector train_stats.
virtual int nTestCosts () const
 Caches getTestCostNames().size() in an internal variable the first time it is called, and then returns the content of this variable.
virtual int nTrainCosts () const
 Caches getTrainCostNames().size() in an internal variable the first time it is called, and then returns the content of this variable.
int getTestCostIndex (const string &costname) const
 returns the index of the given cost in the vector of testcosts (returns -1 if not found)
int getTrainCostIndex (const string &costname) const
 returns the index of the given cost in the vector of traincosts (objectives) (returns -1 if not found)
virtual string classname () const
virtual OptionList & getOptionList () const
virtual OptionMap & getOptionMap () const
virtual RemoteMethodMap & getRemoteMethodMap () const
virtual GaussianProcessRegressor * deepCopy (CopiesMap &copies) const
 GaussianProcessRegressor ()
 Default constructor.
virtual void setTrainingSet (VMat training_set, bool call_forget=true)
 Isolate the training inputs and create an ExtendedVMatrix (to include a bias) if required.
virtual int outputsize () const
 Returns the size of this learner's output, (which typically may depend on its inputsize(), targetsize() and set options).
virtual void forget ()
 (Re-)initializes the PLearner in its fresh state (that state may depend on the 'seed' option) and sets 'stage' back to 0 (this is the stage of a fresh learner!).
virtual void train ()
 The role of the train method is to bring the learner up to stage==nstages, updating the train_stats collector with training costs measured on-line in the process.
virtual void computeOutput (const Vec &input, Vec &output) const
 Computes the output from the input.
virtual void computeCostsFromOutputs (const Vec &input, const Vec &output, const Vec &target, Vec &costs) const
 Computes the costs from already computed output.
virtual bool computeConfidenceFromOutput (const Vec &input, const Vec &output, real probability, TVec< pair< real, real > > &intervals) const
 Compute the confidence intervals based on the GP output variance.
virtual void computeOutputCovMat (const Mat &inputs, Mat &outputs, TVec< Mat > &covariance_matrices) const
 Compute the posterior mean and covariance matrix of a set of inputs.
virtual TVec< std::string > getTestCostNames () const
 Returns the names of the costs computed by computeCostsFromOutputs (and thus the test method).
virtual TVec< std::string > getTrainCostNames () const
 Returns the names of the objective costs that the train method computes and for which it updates the VecStatsCollector train_stats.
virtual string classname () const
virtual OptionList & getOptionList () const
virtual OptionMap & getOptionMap () const
virtual RemoteMethodMap & getRemoteMethodMap () const
virtual GaussianProcessRegressor * deepCopy (CopiesMap &copies) const
virtual void build ()
 Simply calls inherited::build() then build_()
virtual void makeDeepCopyFromShallowCopy (CopiesMap &copies)
 Transforms a shallow copy into a deep copy.

Static Public Member Functions

static string _classname_ ()
static OptionList & _getOptionList_ ()
static RemoteMethodMap & _getRemoteMethodMap_ ()
static Object * _new_instance_for_typemap_ ()
static bool _isa_ (const Object *o)
static void _static_initialize_ ()
static const PPath & declaringFile ()
static string _classname_ ()
static OptionList & _getOptionList_ ()
static RemoteMethodMap & _getRemoteMethodMap_ ()
static Object * _new_instance_for_typemap_ ()
static bool _isa_ (const Object *o)
static void _static_initialize_ ()
static const PPath & declaringFile ()

Public Attributes

PP< Kernel > kernel
int n_outputs
Vec noise_sd
string Gram_matrix_normalization
int max_nb_evectors
Mat alpha
Vec Kxxi
real Kxx
Mat K
Mat eigenvectors
Vec eigenvalues
Vec meanK
real mean_allK
Ker m_kernel
 Kernel to use for the computation.
real m_weight_decay
 Weight-decay coefficient (default = 0).
bool m_include_bias
 Whether to include a bias term in the regression (true by default).
bool m_compute_confidence
 Whether to perform the additional train-time computations required to compute confidence intervals.
real m_confidence_epsilon
 Small regularization to be added post-hoc to the computed output covariance matrix and confidence intervals; this is mostly used as a disaster prevention device, to avoid negative predictive variance.
TVec< pair< string, string > > m_hyperparameters
 List of hyperparameters to optimize.
pair< string, string > m_ARD_hyperprefix_initval
 If the kernel supports automatic relevance determination (ARD; e.g.
PP< Optimizer > m_optimizer
 Specification of the optimizer to use for train-time hyperparameter optimization.
bool m_save_gram_matrix
 If true, the Gram matrix is saved before undergoing each Cholesky decomposition; useful for debugging if the matrix is quasi-singular.
string m_solution_algorithm
 Solution algorithm used for the regression.
TVec< int > m_active_set_indices
 If a sparse approximation algorithm is used (e.g.

Static Public Attributes

static StaticInitializer _static_initializer_

Protected Types

enum  { AlgoExact, AlgoProjectedProcess }
 Solution algorithm in enum form to avoid lengthy string-compare each time we want to compute a confidence interval. More...

Protected Member Functions

void inverseCovTimesVec (real sigma, Vec v, Vec Cinv_v) const
real QFormInverse (real sigma2, Vec u) const
real BayesianCost ()
 to be used for hyper-parameter selection, this is the negative log-likelihood of the training data.
void computeOutputAux (const Vec &input, Vec &output, Vec &kernel_evaluations) const
 Utility internal function for computeOutput, which accepts the destination for kernel evaluations as an argument, and performs no error checking or vector resizing.
PP< GaussianProcessNLLVariable > hyperOptimize (const Mat &inputs, const Mat &targets, VarArray &hyperparam_vars)
 Optimize the hyperparameters if any.
void trainProjectedProcess (const Mat &all_training_inputs, const Mat &sub_training_inputs, const Mat &all_training_targets)
 Update the parameters required for the Projected Process approximation, assuming hyperparameters have already been optimized.

Static Protected Member Functions

static void declareOptions (OptionList &ol)
 Declares this class' options.
static void declareOptions (OptionList &ol)
 Declares the class options.

Protected Attributes

Mat m_alpha
 Matrix of learned parameters, determined from the equation alpha = (M + lambda I)^-1 y (see the Detailed Description).
Mat m_gram_inverse
 Inverse of the Gram matrix, used to compute confidence intervals (must be saved since the confidence intervals are obtained from the equation sigma^2(x) = K(x,x) - k(x)' (M + lambda I)^-1 k(x)).
Mat m_subgram_inverse
 Inverse of the sub-Gram matrix, i.e. the inverse of K_mm over the active set, used by the projected-process approximation.
Vec m_target_mean
 Mean of the targets, if the option 'include_bias' is true.
Mat m_training_inputs
 Saved version of the training set inputs, which must be kept along for carrying out kernel evaluations with the test point.
Vec m_kernel_evaluations
 Buffer for kernel evaluations at test time.
Vec m_gram_inverse_product
 Buffer for the product of the gram inverse with kernel evaluations.
TVec< pair< real, real > > m_intervals
 Buffer to hold confidence intervals when computing costs from outputs.
Mat m_gram_traintest_inputs
 Buffer to hold the Gram matrix of train inputs with test inputs.
Mat m_gram_inv_traintest_product
 Buffer to hold the product of the gram inverse with gram_traintest_inputs.
Mat m_sigma_reductor
 Buffer to hold the sigma reductor for m_gram_inverse_product.
enum { AlgoExact, AlgoProjectedProcess } m_algorithm_enum

Private Types

typedef PLearner inherited

Private Member Functions

void build_ ()
 This does the actual building.
void build_ ()
 This does the actual building.

Detailed Description

Implements Gaussian Process Regression (GPR) with an arbitrary kernel.

Simple Gaussian Process Regression.

prediction = E[ E[y|x] | training_set ] = E[ y | x, training_set ]
prediction[j] = sum_i alpha_{ji} K(x,x_i) = (K(x,x_i))_i' inv(K + sigma^2[j] I) targets

Var[ y[j] | x, training_set ] = Var[ E[y[j]|x] | training_set ] + E[ Var[y[j]|x] | training_set ],
where Var[ E[y[j]|x] | training_set ] = K(x,x) - (K(x,x_i))_i' inv(K + sigma^2[j] I) (K(x,x_i))_i
and E[ Var[y[j]|x] | training_set ] = Var[y[j]|x] = sigma^2[j] = noise variance.

costs:
MSE = sum_j (y[j] - prediction[j])^2
NLL = sum_j log Normal(y[j]; prediction[j], Var[y[j] | x, training_set])

Given a kernel K(x,y) = phi(x)'phi(y), where phi(x) is the projection of a vector x into feature space, this class implements a version of Gaussian Process Regression, giving the prediction at x as

f(x) = k(x)'(M + lambda I)^-1 y,

where x is the test vector where to estimate the response, k(x) is the vector of kernel evaluations between the test vector and the elements of the training set, namely

k(x) = (K(x,x1), K(x,x2), ..., K(x,xN))',

M is the Gram Matrix on the elements of the training set, i.e. the matrix where the element (i,j) is equal to K(xi, xj), lambda is the VARIANCE of the observation noise (and can be interpreted as a weight decay coefficient), and y is the vector of training-set targets.
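
To make the formula concrete, the following is a minimal, self-contained C++ sketch of the exact prediction: build the Gram matrix, solve (M + lambda I) alpha = y, then evaluate f(x) = k(x)' alpha at a test point. It is an editorial illustration only: it uses plain std::vector and a naive elimination solver instead of the PLearn Kernel/Mat/Vec classes, and rbf_kernel, solve and the toy data are placeholders, not part of the PLearn API.

#include <cmath>
#include <iostream>
#include <vector>

// Placeholder squared-exponential kernel K(a,b) = exp(-||a-b||^2 / (2 l^2)).
static double rbf_kernel(const std::vector<double>& a, const std::vector<double>& b, double l)
{
    double d2 = 0.0;
    for (size_t k = 0; k < a.size(); ++k)
        d2 += (a[k] - b[k]) * (a[k] - b[k]);
    return std::exp(-d2 / (2.0 * l * l));
}

// Solve A x = b by Gaussian elimination with partial pivoting (A is n x n).
static std::vector<double> solve(std::vector<std::vector<double> > A, std::vector<double> b)
{
    const size_t n = b.size();
    for (size_t i = 0; i < n; ++i) {
        size_t piv = i;
        for (size_t r = i + 1; r < n; ++r)
            if (std::fabs(A[r][i]) > std::fabs(A[piv][i])) piv = r;
        std::swap(A[i], A[piv]);
        std::swap(b[i], b[piv]);
        for (size_t r = i + 1; r < n; ++r) {
            const double f = A[r][i] / A[i][i];
            for (size_t c = i; c < n; ++c) A[r][c] -= f * A[i][c];
            b[r] -= f * b[i];
        }
    }
    std::vector<double> x(n);
    for (size_t i = n; i-- > 0; ) {
        double s = b[i];
        for (size_t c = i + 1; c < n; ++c) s -= A[i][c] * x[c];
        x[i] = s / A[i][i];
    }
    return x;
}

int main()
{
    // Toy 1-d training set: y = sin(x).
    std::vector<std::vector<double> > X = { {0.0}, {1.0}, {2.0}, {3.0}, {4.0} };
    std::vector<double> y;
    for (size_t i = 0; i < X.size(); ++i) y.push_back(std::sin(X[i][0]));

    const double l = 1.0;        // kernel lengthscale
    const double lambda = 1e-2;  // observation-noise variance (weight decay)
    const size_t N = X.size();

    // Gram matrix with lambda added to the diagonal, then alpha = (M + lambda I)^-1 y.
    std::vector<std::vector<double> > M(N, std::vector<double>(N));
    for (size_t i = 0; i < N; ++i)
        for (size_t j = 0; j < N; ++j)
            M[i][j] = rbf_kernel(X[i], X[j], l) + (i == j ? lambda : 0.0);
    const std::vector<double> alpha = solve(M, y);

    // Prediction at a test point: f(x) = k(x)' alpha.
    const std::vector<double> xstar = {2.5};
    double f = 0.0;
    for (size_t i = 0; i < N; ++i)
        f += rbf_kernel(xstar, X[i], l) * alpha[i];
    std::cout << "f(2.5) = " << f << "  (sin(2.5) = " << std::sin(2.5) << ")\n";
    return 0;
}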

The uncertainty in a prediction can be computed by calling computeConfidenceFromOutput. Furthermore, if desired, this learner allows optimization of the kernel hyperparameters by direct optimization of the marginal likelihood w.r.t. the hyperparameters. This mechanism relies on a user-provided Optimizer (see the 'optimizer' option) and does not rely on the PLearn HyperLearner system.
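
The confidence intervals mentioned above reduce to a two-sided Gaussian interval around the predictive mean, as in computeConfidenceFromOutput further below. The sketch that follows is an editorial illustration: gauss_cdf and gaussian_quantile are stand-ins for PLearn's gauss_01_quantile, obtained here by bisection on the standard normal CDF.

#include <cmath>
#include <iostream>
#include <utility>

// Standard normal CDF via erf.
static double gauss_cdf(double x) { return 0.5 * (1.0 + std::erf(x / std::sqrt(2.0))); }

// Standard normal quantile by bisection (stand-in for PLearn's gauss_01_quantile).
static double gaussian_quantile(double p)
{
    double lo = -10.0, hi = 10.0;
    for (int it = 0; it < 200; ++it) {
        const double mid = 0.5 * (lo + hi);
        if (gauss_cdf(mid) < p) lo = mid; else hi = mid;
    }
    return 0.5 * (lo + hi);
}

int main()
{
    const double mean = 1.3, sigma = 0.4;  // predictive mean and std. dev. at a test point
    const double probability = 0.95;       // requested two-sided coverage

    // Two-tailed interval, as in computeConfidenceFromOutput.
    const double multiplier = gaussian_quantile((1.0 + probability) / 2.0);
    const std::pair<double, double> interval(mean - multiplier * sigma,
                                             mean + multiplier * sigma);
    std::cout << "[" << interval.first << ", " << interval.second << "]\n";
    return 0;
}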

GaussianProcessRegressor reports a set of train costs and test costs; their names are given by getTrainCostNames() and getTestCostNames(), respectively.

The disadvantage of this learner is that its training time is O(N^3) in the number of training examples (due to the matrix inversion). When saving the learner, the training set inputs must be saved, along with an additional matrix of length number-of-training-examples, and width number-of-targets.

To alleviate the computational bottleneck of the exact method, the sparse approximation method of Projected Process is also available. This method requires identifying M datapoints in the training set called the active set, although it makes use of all N training points for computing the likelihood. The computational complexity of the approach is then O(NM^2). Note that in the current implementation, hyperparameter optimization is performed using ONLY the active set (called the "Subset of Data" method in the Rasmussen & Williams book). Making use of the full set of datapoints is more computationally expensive and would require substantial updates to the PLearn Kernel class (to efficiently support asymmetric kernel-matrix gradient). This may come later.
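
For reference, the projected-process equations that the implementation follows (eq. 8.26 and 8.27 of Rasmussen & Williams, as cited in the code comments of trainProjectedProcess and computeOutputCovMat below) can be restated in LaTeX. This restatement is an editorial addition; X denotes the full training inputs, X_m the active set, K_mn = K(X_m, X), and sigma_n^2 the effective noise variance:

\bar{f}_* = K(X_*, X_m)\,\bigl(\sigma_n^2 K_{mm} + K_{mn} K_{nm}\bigr)^{-1} K_{mn}\, y

\mathrm{cov}(f_*) = K(X_*, X_*) - K(X_*, X_m)\, K_{mm}^{-1}\, K(X_m, X_*)
                  + \sigma_n^2\, K(X_*, X_m)\,\bigl(\sigma_n^2 K_{mm} + K_{mn} K_{nm}\bigr)^{-1} K(X_m, X_*)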

Definition at line 72 of file distributions/DEPRECATED/GaussianProcessRegressor.h.


Member Typedef Documentation

Reimplemented from PLearn::PConditionalDistribution.

Definition at line 116 of file regressors/GaussianProcessRegressor.h.


Member Enumeration Documentation

anonymous enum [protected]

Solution algorithm in enum form to avoid lengthy string-compare each time we want to compute a confidence interval.

Enumerator:
AlgoExact 
AlgoProjectedProcess 

Definition at line 364 of file regressors/GaussianProcessRegressor.h.


Constructor & Destructor Documentation

PLearn::GaussianProcessRegressor::GaussianProcessRegressor ( )
PLearn::GaussianProcessRegressor::~GaussianProcessRegressor ( ) [virtual]
PLearn::GaussianProcessRegressor::GaussianProcessRegressor ( )

Default constructor.


Member Function Documentation

string PLearn::GaussianProcessRegressor::_classname_ ( ) [static]
static string PLearn::GaussianProcessRegressor::_classname_ ( ) [static]

Reimplemented from PLearn::PConditionalDistribution.

OptionList & PLearn::GaussianProcessRegressor::_getOptionList_ ( ) [static]
static OptionList& PLearn::GaussianProcessRegressor::_getOptionList_ ( ) [static]

Reimplemented from PLearn::PConditionalDistribution.

RemoteMethodMap & PLearn::GaussianProcessRegressor::_getRemoteMethodMap_ ( ) [static]
static RemoteMethodMap& PLearn::GaussianProcessRegressor::_getRemoteMethodMap_ ( ) [static]

Reimplemented from PLearn::PConditionalDistribution.

bool PLearn::GaussianProcessRegressor::_isa_ ( const Object * o) [static]
static bool PLearn::GaussianProcessRegressor::_isa_ ( const Object * o) [static]

Reimplemented from PLearn::PConditionalDistribution.

Object * PLearn::GaussianProcessRegressor::_new_instance_for_typemap_ ( ) [static]
static Object* PLearn::GaussianProcessRegressor::_new_instance_for_typemap_ ( ) [static]

Reimplemented from PLearn::PConditionalDistribution.

void PLearn::GaussianProcessRegressor::_static_initialize_ ( ) [static]
static void PLearn::GaussianProcessRegressor::_static_initialize_ ( ) [static]

Reimplemented from PLearn::PConditionalDistribution.

real PLearn::GaussianProcessRegressor::BayesianCost ( ) [protected]

to be used for hyper-parameter selection, this is the negative log-likelihood of the training data.

compute the "training cost" = negative log-likelihood of the training data = 0.5*sum_i (log det(K+sigma[i]^2 I) + y' inv(K+sigma[i]^2 I) y + l log(2 pi))

Definition at line 420 of file distributions/DEPRECATED/GaussianProcessRegressor.cc.

References eigenvalues, eigenvectors, i, j, K, PLearn::TMat< T >::length(), Log2Pi, m, n_outputs, noise_sd, and PLearn::safeflog().

{
    int l=K.length();
    int m=eigenvectors.length();
    real nll = l*n_outputs*Log2Pi;
    for (int i=0;i<n_outputs;i++)
    {
        real sigma2_i=noise_sd[i]*noise_sd[i];
        //nll += QFormInverse(sigma2_i,targets); // y'*inv(C)*y 
        // add the log det(K+sigma_i^2 I) contribution
        if (m<l)
            // the last l-m eigenvalues are sigma_i^2
            nll += (l-m)*safeflog(sigma2_i); 
        // while the first m ones are lambda_i + sigma_i^2
        for (int j=0;j<m;j++)
            nll += safeflog(eigenvalues[j]+sigma2_i);
    }
    nll *= 0.5;
    return nll;
}


void PLearn::GaussianProcessRegressor::build ( ) [virtual]

Simply calls inherited::build() then build_()

Reimplemented from PLearn::PConditionalDistribution.

Definition at line 166 of file distributions/DEPRECATED/GaussianProcessRegressor.cc.

References PLearn::PConditionalDistribution::build(), and build_().


virtual void PLearn::GaussianProcessRegressor::build ( ) [virtual]

Simply calls inherited::build() then build_()

Reimplemented from PLearn::PConditionalDistribution.

void PLearn::GaussianProcessRegressor::build_ ( ) [private]

This does the actual building.

Reimplemented from PLearn::PConditionalDistribution.

void PLearn::GaussianProcessRegressor::build_ ( ) [private]

This does the actual building.

Reimplemented from PLearn::PConditionalDistribution.

Definition at line 135 of file distributions/DEPRECATED/GaussianProcessRegressor.cc.

References PLearn::PPath::absolute(), alpha, PLearn::PLearner::expdir, PLearn::force_mkdir(), K, Kxxi, PLearn::VMat::length(), meanK, n_outputs, outputsize(), PLERROR, PLearn::TVec< T >::resize(), PLearn::TMat< T >::resize(), and PLearn::PLearner::train_set.

Referenced by build().

{
    if(expdir!="")
    {
        if(!force_mkdir(expdir))
            PLERROR("In GaussianProcessRegressor Could not create experiment directory %s",expdir.absolute().c_str());
        expdir = expdir.absolute() / "";
        // expdir = abspath(expdir);
    }
  
    if (train_set)
    {
        K.resize(train_set->length(),train_set->length());
        Kxxi.resize(train_set->length());
        alpha.resize(outputsize(),train_set->length());
        meanK.resize(train_set->length());
        n_outputs = train_set->targetsize();
    }
}


virtual string PLearn::GaussianProcessRegressor::classname ( ) const [virtual]

Reimplemented from PLearn::PConditionalDistribution.

string PLearn::GaussianProcessRegressor::classname ( ) const [virtual]
bool PLearn::GaussianProcessRegressor::computeConfidenceFromOutput ( const Vec & input,
const Vec & output,
real  probability,
TVec< pair< real, real > > &  intervals 
) const [virtual]

Compute the confidence intervals based on the GP output variance.

Reimplemented from PLearn::PLearner.

Definition at line 567 of file regressors/GaussianProcessRegressor.cc.

References PLearn::dot(), PLearn::gauss_01_quantile(), i, PLearn::max(), n, PLASSERT, PLWARNING, PLearn::product(), PLearn::productScaleAcc(), PLearn::TVec< T >::size(), and PLearn::sqrt().

{
    if (! m_compute_confidence) {
        PLWARNING("GaussianProcessRegressor::computeConfidenceFromOutput: the option\n"
                  "'compute_confidence' must be true in order to compute valid\n"
                  "condidence intervals");
        return false;
    }

    // BIG assumption: assume that computeOutput has just been called and that
    // m_kernel_evaluations contains the right stuff.
    PLASSERT( m_kernel && m_gram_inverse.isNotNull() );
    real base_sigma_sq = m_kernel(input, input);
    m_gram_inverse_product.resize(m_kernel_evaluations.size());

    real sigma;
    if (m_algorithm_enum == AlgoExact) {
        product(m_gram_inverse_product, m_gram_inverse, m_kernel_evaluations);
        real sigma_reductor = dot(m_gram_inverse_product, m_kernel_evaluations);
        sigma = sqrt(max(real(0.),
                         base_sigma_sq - sigma_reductor + m_confidence_epsilon));
    }
    else if (m_algorithm_enum == AlgoProjectedProcess) {
        // From R&W eq. (8.27).
        product(m_gram_inverse_product, m_subgram_inverse, m_kernel_evaluations);
        productScaleAcc(m_gram_inverse_product, m_gram_inverse, m_kernel_evaluations,
                        -1.0, 1.0);
        real sigma_reductor = dot(m_gram_inverse_product, m_kernel_evaluations);
        sigma = sqrt(max(real(0.),
                         base_sigma_sq - sigma_reductor + m_confidence_epsilon));
    }

    // two-tailed
    const real multiplier = gauss_01_quantile((1+probability)/2);
    real half_width = multiplier * sigma;
    intervals.resize(output.size());
    for (int i=0, n=output.size() ; i<n ; ++i)
        intervals[i] = std::make_pair(output[i] - half_width,
                                      output[i] + half_width);
    return true;
}


virtual void PLearn::GaussianProcessRegressor::computeCostsFromOutputs ( const Vec & input,
const Vec & output,
const Vec & target,
Vec & costs
) const [virtual]

Computes the costs from already computed output.

Implements PLearn::PLearner.

void PLearn::GaussianProcessRegressor::computeCostsFromOutputs ( const Vec & input,
const Vec & output,
const Vec & target,
Vec & costs
) const [virtual]

This should be defined in subclasses to compute the weighted costs from already computed output.

NOTE: In exotic cases, the cost may also depend on some info in the input, that's why the method also gets to see it.

Implements PLearn::PLearner.

Definition at line 291 of file distributions/DEPRECATED/GaussianProcessRegressor.cc.

References PLearn::diff(), expectation(), PLearn::gauss_log_density_var(), i, n_outputs, noise_sd, PLearn::PDistribution::outputs_def, PLearn::TVec< T >::resize(), PLearn::TVec< T >::subVec(), PLearn::var(), and variance().

Referenced by computeOutputAndCosts().

{
    Vec mu;
    static Vec var;
    int i0=0;
    if (outputs_def.find("e")!=string::npos)
    {
        mu = output.subVec(i0,n_outputs);
        i0+=n_outputs;
    }
    else
        mu = expectation();
    if (outputs_def.find("v")!=string::npos)
    {
        var = output.subVec(i0,n_outputs);
        i0+=n_outputs;
    }
    else
    {
        var.resize(n_outputs);
        variance(var);
    }
    real mse = 0;
    real logdensity = 0;
    for (int i=0;i<n_outputs;i++)
    {
        real diff=mu[i] - target[i];
        mse += diff*diff;
        logdensity += gauss_log_density_var(target[i],mu[i],var[i]+noise_sd[i]*noise_sd[i]);
    }
    costs[0]=mse;
    costs[1]=logdensity;
}


void PLearn::GaussianProcessRegressor::computeCostsOnly ( const Vec & input,
const Vec & target,
Vec & costs
) const [virtual]

Default calls computeOutputAndCosts This may be overloaded if there is a more efficient way to compute the costs directly, without computing the whole output vector.

Reimplemented from PLearn::PLearner.

Definition at line 333 of file distributions/DEPRECATED/GaussianProcessRegressor.cc.

References computeOutputAndCosts(), outputsize(), PLearn::TVec< T >::resize(), and PLearn::PLearner::tmp_output.

{
    static Vec tmp_output;
    tmp_output.resize(outputsize());
    computeOutputAndCosts(input, target, tmp_output, costs);
}


virtual void PLearn::GaussianProcessRegressor::computeOutput ( const Vec & input,
Vec & output
) const [virtual]

Computes the output from the input.

Reimplemented from PLearn::PConditionalDistribution.

void PLearn::GaussianProcessRegressor::computeOutput ( const Vec & input,
Vec & output
) const [virtual]

Produce outputs according to what is specified in outputs_def.

Reimplemented from PLearn::PConditionalDistribution.

Definition at line 261 of file distributions/DEPRECATED/GaussianProcessRegressor.cc.

References expectation(), n_outputs, PLearn::PDistribution::outputs_def, PLearn::TVec< T >::subVec(), and variance().

Referenced by computeOutputAndCosts().

{
    setInput_const(input);
    int i0=0;
    if (outputs_def.find("e") != string::npos)
    {
        expectation(output.subVec(i0,n_outputs));
        i0+=n_outputs;
    }
    if (outputs_def.find("v") != string::npos)
    {
        variance(output.subVec(i0,n_outputs));
        i0+=n_outputs;
    }
}


void PLearn::GaussianProcessRegressor::computeOutputAndCosts ( const Vec & input,
const Vec & target,
Vec & output,
Vec & costs
) const [virtual]

Default calls computeOutput and computeCostsFromOutputs You may overload this if you have a more efficient way to compute both output and weighted costs at the same time.

Reimplemented from PLearn::PLearner.

Definition at line 326 of file distributions/DEPRECATED/GaussianProcessRegressor.cc.

References computeCostsFromOutputs(), and computeOutput().

Referenced by computeCostsOnly().

{
    computeOutput(input, output);
    computeCostsFromOutputs(input, output, target, costs);
}


void PLearn::GaussianProcessRegressor::computeOutputAux ( const Vec & input,
Vec & output,
Vec & kernel_evaluations
) const [protected]

Utility internal function for computeOutput, which accepts the destination for kernel evaluations as an argument, and performs no error checking or vector resizing.

Definition at line 504 of file regressors/GaussianProcessRegressor.cc.

References PLearn::TVec< T >::fill(), PLearn::TVec< T >::hasMissing(), MISSING_VALUE, PLearn::product(), and PLearn::TVec< T >::size().

{
    if (input.hasMissing()) {
        output.fill(MISSING_VALUE);
        kernel_evaluations.fill(MISSING_VALUE);
        return;
    }
    
    m_kernel->evaluate_all_i_x(input, kernel_evaluations);

    // Finally compute k(x,x_i) * (K + \lambda I)^-1 y.
    // This expression does not change depending on whether we are using
    // the exact algorithm or the projected-process approximation.
    product(Mat(1, output.size(), output),
            Mat(1, kernel_evaluations.size(), kernel_evaluations),
            m_alpha);

    if (m_include_bias)
        output += m_target_mean;
}


void PLearn::GaussianProcessRegressor::computeOutputCovMat ( const Mat & inputs,
Mat & outputs,
TVec< Mat > &  covariance_matrices 
) const [virtual]

Compute the posterior mean and covariance matrix of a set of inputs.

Note that if any of the inputs contains a missing value (NaN), then the whole covariance matrix is NaN (in the current implementation).

Reimplemented from PLearn::PLearner.

Definition at line 614 of file regressors/GaussianProcessRegressor.cc.

References PLearn::TMat< T >::fill(), PLearn::TMat< T >::hasMissing(), i, j, PLearn::TMat< T >::length(), PLearn::max(), MISSING_VALUE, PLearn::TMat< T >::mod(), N, PLASSERT, PLearn::product(), PLearn::productTranspose(), PLearn::productTransposeScaleAcc(), PLearn::TMat< T >::resize(), PLearn::TVec< T >::resize(), and PLearn::TMat< T >::width().

{
    PLASSERT( m_kernel && m_alpha.isNotNull() && m_training_inputs.size() > 0 );
    PLASSERT( m_alpha.width()  == outputsize() );
    PLASSERT( m_alpha.length() == m_training_inputs.length() );
    PLASSERT( inputs.width()   == m_training_inputs.width()  );
    PLASSERT( inputs.width()   == inputsize() );
    const int N = inputs.length();
    const int M = outputsize();
    const int T = m_training_inputs.length();
    outputs.resize(N, M);
    covariance_matrices.resize(M);

    // Preallocate space for the covariance matrix, and since all outputs share
    // the same matrix, copy it into the remaining elements of
    // covariance_matrices
    Mat& covmat = covariance_matrices[0];
    covmat.resize(N, N);
    for (int j=1 ; j<M ; ++j)
        covariance_matrices[j] = covmat;

    // Start by computing the matrix of kernel evaluations between the train
    // and test outputs, and compute the output
    m_gram_traintest_inputs.resize(N, T);
    bool has_missings = false;
    for (int i=0 ; i<N ; ++i) {
        Vec cur_traintest_kereval = m_gram_traintest_inputs(i);
        Vec cur_output = outputs(i);
        computeOutputAux(inputs(i), cur_output, cur_traintest_kereval);
        has_missings = has_missings || inputs(i).hasMissing();
    }

    // If any missings found in the inputs, don't bother with computing a
    // covariance matrix
    if (has_missings) {
        covmat.fill(MISSING_VALUE);
        return;
    }

    // Next compute the kernel evaluations between the test inputs; more or
    // less lifted from Kernel.cc ==> must see with Olivier how to better
    // factor this code
    Mat& K = covmat;

    PLASSERT( K.width() == N && K.length() == N );
    const int mod = K.mod();
    real Kij;
    real* Ki;
    real* Kji;
    for (int i=0 ; i<N ; ++i) {
        Ki  = K[i];
        Kji = &K[0][i];
        const Vec& cur_input_i = inputs(i);
        for (int j=0 ; j<=i ; ++j, Kji += mod) {
            Kij = m_kernel->evaluate(cur_input_i, inputs(j));
            *Ki++ = Kij;
            if (j<i)
                *Kji = Kij;    // Assume symmetry, checked at build
        }
    }

    // The predictive covariance matrix is, for the exact case (cf. Rasmussen
    // and Williams):
    //
    //    cov(f*) = K(X*,X*) - K(X*,X) [K(X,X) + sigma*I]^-1 K(X,X*)
    //
    // where X are the training inputs, and X* are the test inputs.
    //
    // For the projected process case, it is:
    //
    //    cov(f*) = K(X*,X*) - K(X*,X_m) K_mm^-1 K(X*,X_m)
    //               + sigma^2 K(X*,X_m) (sigma^2 K_mm + K_mn K_nm)^-1 K(X*,X_m)
    //
    // Note that all sigma^2's have been absorbed into their respective
    // cached terms, and in particular in this context sigma^2 is emphatically
    // not equal to the weight decay.
    m_gram_inv_traintest_product.resize(T,N);
    m_sigma_reductor.resize(N,N);

    if (m_algorithm_enum == AlgoExact) {    
        productTranspose(m_gram_inv_traintest_product, m_gram_inverse,
                         m_gram_traintest_inputs);
        product(m_sigma_reductor, m_gram_traintest_inputs,
                m_gram_inv_traintest_product);
    }
    else if (m_algorithm_enum == AlgoProjectedProcess) {
        productTranspose(m_gram_inv_traintest_product, m_subgram_inverse,
                         m_gram_traintest_inputs);
        productTransposeScaleAcc(m_gram_inv_traintest_product, m_gram_inverse,
                                 m_gram_traintest_inputs, -1.0, 1.0);
        product(m_sigma_reductor, m_gram_traintest_inputs,
                m_gram_inv_traintest_product);
    }
    
    covmat -= m_sigma_reductor;

    // As a preventive measure, never output negative variance, even though
    // this does not guarantee the non-negative-definiteness of the matrix
    for (int i=0 ; i<N ; ++i)
        covmat(i,i) = max(real(0.0), covmat(i,i) + m_confidence_epsilon);
}


void PLearn::GaussianProcessRegressor::declareOptions ( OptionList & ol) [static, protected]

Declares this class' options.

Reimplemented from PLearn::PConditionalDistribution.

Definition at line 106 of file distributions/DEPRECATED/GaussianProcessRegressor.cc.

References PLearn::OptionBase::buildoption, PLearn::declareOption(), PLearn::PConditionalDistribution::declareOptions(), Gram_matrix_normalization, kernel, max_nb_evectors, and noise_sd.

{
    declareOption(ol, "kernel", &GaussianProcessRegressor::kernel, OptionBase::buildoption, 
                  "The kernel (seen as a symmetric, two-argument function of a pair of input points)\n"
                  "that corresponds to the prior covariance on the function to be learned.\n");

    declareOption(ol, "noise_sd", &GaussianProcessRegressor::noise_sd, OptionBase::buildoption,
                  "Output noise std. dev. (one element per output).\n");


    declareOption(ol, "max_nb_evectors", &GaussianProcessRegressor::max_nb_evectors, OptionBase::buildoption,
                  "Maximum number of eigenvectors of the Gram matrix to compute (or -1 if all should be computed).\n");


    declareOption(ol, "Gram_matrix_normalization", &GaussianProcessRegressor::Gram_matrix_normalization, 
                  OptionBase::buildoption,
                  "normalization method to apply to Gram matrix. Expected values are:\n"
                  "\"none\": no normalization\n"
                  "\"centering_a_dot_product\": this is the kernel PCA centering\n"
                  "     K_{ij} <-- K_{ij} - mean_i(K_ij) - mean_j(K_ij) + mean_{ij}(K_ij)\n"
                  "\"centering_a_distance\": this is the MDS transformation of squared distances to dot products\n"
                  "     K_{ij} <-- -0.5(K_{ij} - mean_i(K_ij) - mean_j(K_ij) + mean_{ij}(K_ij))\n"
                  "\"divisive\": this is the spectral clustering and Laplacian eigenmaps normalization\n"
                  "     K_{ij} <-- K_{ij}/sqrt(mean_i(K_ij) mean_j(K_ij))\n");


    inherited::declareOptions(ol);
}


static void PLearn::GaussianProcessRegressor::declareOptions ( OptionList & ol) [static, protected]

Declares the class options.

Reimplemented from PLearn::PConditionalDistribution.

static const PPath& PLearn::GaussianProcessRegressor::declaringFile ( ) [inline, static]
static const PPath& PLearn::GaussianProcessRegressor::declaringFile ( ) [inline, static]

Reimplemented from PLearn::PConditionalDistribution.

Definition at line 270 of file regressors/GaussianProcessRegressor.h.

virtual GaussianProcessRegressor* PLearn::GaussianProcessRegressor::deepCopy ( CopiesMap & copies) const [virtual]

Reimplemented from PLearn::PConditionalDistribution.

GaussianProcessRegressor * PLearn::GaussianProcessRegressor::deepCopy ( CopiesMap & copies) const [virtual]
Vec PLearn::GaussianProcessRegressor::expectation ( ) const [virtual]

return E[X]

Definition at line 225 of file distributions/DEPRECATED/GaussianProcessRegressor.cc.

References n_outputs, and PLearn::TVec< T >::resize().

Referenced by computeCostsFromOutputs(), and computeOutput().

{
    static Vec expected_target;
    expected_target.resize(n_outputs);
    expectation(expected_target);
    return expected_target;
}


void PLearn::GaussianProcessRegressor::expectation ( Vec  expected_y) const [virtual]

return E[X]

Definition at line 218 of file distributions/DEPRECATED/GaussianProcessRegressor.cc.

References alpha, PLearn::dot(), i, Kxxi, and n_outputs.

{
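    // Predictive mean for each output: E[y_i | x] = k(x)' alpha_i,
    // where alpha_i = (K + noise_sd[i]^2 I)^-1 targets_i was precomputed in train().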
    for (int i=0;i<n_outputs;i++)
        expected_y[i] = dot(Kxxi,alpha(i));
}


virtual void PLearn::GaussianProcessRegressor::forget ( ) [virtual]

(Re-)initializes the PLearner in its fresh state (that state may depend on the 'seed' option) and sets 'stage' back to 0 (this is the stage of a fresh learner!).

Reimplemented from PLearn::PLearner.

void PLearn::GaussianProcessRegressor::forget ( ) [virtual]

*** SUBCLASS WRITING: ***

(Re-)initializes the PLearner in its fresh state (that state may depend on the 'seed' option) and sets 'stage' back to 0 (this is the stage of a fresh learner!)

A typical forget() method should do the following:

  • initialize the learner's parameters, using this random generator
  • stage = 0;

This method is typically called by the build_() method, after it has finished setting up the parameters, and if it deemed useful to set or reset the learner in its fresh state. (remember build may be called after modifying options that do not necessarily require the learner to restart from a fresh state...) forget is also called by the setTrainingSet method, after calling build(), so it will generally be called TWICE during setTrainingSet!

Reimplemented from PLearn::PLearner.

Definition at line 172 of file distributions/DEPRECATED/GaussianProcessRegressor.cc.

References PLearn::PLearner::stage.

{
    stage = 0;
}
OptionList & PLearn::GaussianProcessRegressor::getOptionList ( ) const [virtual]
virtual OptionList& PLearn::GaussianProcessRegressor::getOptionList ( ) const [virtual]

Reimplemented from PLearn::PConditionalDistribution.

virtual OptionMap& PLearn::GaussianProcessRegressor::getOptionMap ( ) const [virtual]

Reimplemented from PLearn::PConditionalDistribution.

OptionMap & PLearn::GaussianProcessRegressor::getOptionMap ( ) const [virtual]
RemoteMethodMap & PLearn::GaussianProcessRegressor::getRemoteMethodMap ( ) const [virtual]
virtual RemoteMethodMap& PLearn::GaussianProcessRegressor::getRemoteMethodMap ( ) const [virtual]

Reimplemented from PLearn::PConditionalDistribution.

int PLearn::GaussianProcessRegressor::getTestCostIndex ( const string &  costname) const

returns the index of the given cost in the vector of testcosts (returns -1 if not found)

Reimplemented from PLearn::PLearner.

Definition at line 192 of file distributions/DEPRECATED/GaussianProcessRegressor.cc.

References getTestCostNames(), i, and PLearn::TVec< T >::length().

{
    TVec<string> costnames = getTestCostNames();
    for(int i=0; i<costnames.length(); i++)
        if(costnames[i]==costname)
            return i;
    return -1;
}


TVec< string > PLearn::GaussianProcessRegressor::getTestCostNames ( ) const [virtual]

This should return the names of the costs computed by computeCostsFromOutputs.

Implements PLearn::PLearner.

Definition at line 189 of file distributions/DEPRECATED/GaussianProcessRegressor.cc.

References getTrainCostNames().

Referenced by getTestCostIndex().

{ return getTrainCostNames(); }


virtual TVec<std::string> PLearn::GaussianProcessRegressor::getTestCostNames ( ) const [virtual]

Returns the names of the costs computed by computeCostsFromOutputs (and thus the test method).

Implements PLearn::PLearner.

int PLearn::GaussianProcessRegressor::getTrainCostIndex ( const string &  costname) const

returns the index of the given cost in the vector of traincosts (objectives) (returns -1 if not found)

Reimplemented from PLearn::PLearner.

Definition at line 201 of file distributions/DEPRECATED/GaussianProcessRegressor.cc.

References getTrainCostNames(), i, and PLearn::TVec< T >::length().

{
    TVec<string> costnames = getTrainCostNames();
    for(int i=0; i<costnames.length(); i++)
        if(costnames[i]==costname)
            return i;
    return -1;
}


TVec< string > PLearn::GaussianProcessRegressor::getTrainCostNames ( ) const [virtual]

This should return the names of the objective costs that the train method computes and for which it updates the VecStatsCollector train_stats.

Implements PLearn::PLearner.

Definition at line 181 of file distributions/DEPRECATED/GaussianProcessRegressor.cc.

Referenced by getTestCostNames(), and getTrainCostIndex().

{
    TVec<string> names(2);
    names[0]="log-likelihood";
    names[1]="mse";
    return names;
}


virtual TVec<std::string> PLearn::GaussianProcessRegressor::getTrainCostNames ( ) const [virtual]

Returns the names of the objective costs that the train method computes and for which it updates the VecStatsCollector train_stats.

Implements PLearn::PLearner.

PP< GaussianProcessNLLVariable > PLearn::GaussianProcessRegressor::hyperOptimize ( const Mat & inputs,
const Mat & targets,
VarArray & hyperparam_vars
) [protected]

Optimize the hyperparameters if any.

Return a Variable on which train() carries out a final fprop for obtaining the final trained learner parameters.

Definition at line 742 of file regressors/GaussianProcessRegressor.cc.

References PLearn::TVec< T >::first(), i, j, PLearn::GaussianProcessNLLVariable::logVarray(), PLearn::PStream::plearn_ascii, and PLearn::tostring().

{
    // If there are no hyperparameters or optimizer, just create a simple
    // variable and return it right away.
    if (! m_optimizer || (m_hyperparameters.size() == 0 &&
                          m_ARD_hyperprefix_initval.first.empty()) )
    {
        return new GaussianProcessNLLVariable(
            m_kernel, m_weight_decay, inputs, targets,
            TVec<string>(), VarArray(), m_compute_confidence,
            m_save_gram_matrix, getExperimentDirectory());
    }

    // Otherwise create Vars that wrap each hyperparameter
    const int numhyper  = m_hyperparameters.size();
    const int numinputs = ( ! m_ARD_hyperprefix_initval.first.empty() ?
                            inputsize() : 0 );
    hyperparam_vars = VarArray(numhyper + numinputs);
    TVec<string> hyperparam_names(numhyper + numinputs);
    int i;
    for (i=0 ; i<numhyper ; ++i) {
        hyperparam_names[i] = m_hyperparameters[i].first;
        hyperparam_vars [i] = new ObjectOptionVariable(
            (Kernel*)m_kernel, m_hyperparameters[i].first, m_hyperparameters[i].second);
        hyperparam_vars[i]->setName(m_hyperparameters[i].first);
    }

    // If specified, create the Vars for automatic relevance determination
    string& ARD_name = m_ARD_hyperprefix_initval.first;
    string& ARD_init = m_ARD_hyperprefix_initval.second;
    if (! ARD_name.empty()) {
        // Small hack to ensure the ARD vector in the kernel has proper size
        Vec init(numinputs, lexical_cast<double>(ARD_init));
        m_kernel->changeOption(ARD_name, tostring(init, PStream::plearn_ascii));
        
        for (int j=0 ; j<numinputs ; ++j, ++i) {
            hyperparam_names[i] = ARD_name + '[' + tostring(j) + ']';
            hyperparam_vars [i] = new ObjectOptionVariable(
                (Kernel*)m_kernel, hyperparam_names[i], ARD_init);
            hyperparam_vars [i]->setName(hyperparam_names[i]);
        }
    }

    // Create the cost-function variable
    PP<GaussianProcessNLLVariable> nll = new GaussianProcessNLLVariable(
        m_kernel, m_weight_decay, inputs, targets, hyperparam_names,
        hyperparam_vars, true, m_save_gram_matrix, getExperimentDirectory());
    nll->setName("GaussianProcessNLLVariable");

    // Some logging about the initial values
    GaussianProcessNLLVariable::logVarray(hyperparam_vars,
                                          "Hyperparameter initial values:");
    
    // And optimize for nstages
    m_optimizer->setToOptimize(hyperparam_vars, (Variable*)nll);
    m_optimizer->build();
    PP<ProgressBar> pb(
        report_progress? new ProgressBar("Training GaussianProcessRegressor "
                                         "from stage " + tostring(stage) + " to stage " +
                                         tostring(nstages), nstages-stage)
        : 0);
    bool early_stopping = false;
    PP<VecStatsCollector> statscol = new VecStatsCollector;
    for (const int initial_stage = stage ; !early_stopping && stage < nstages
             ; ++stage)
    {
        if (pb)
            pb->update(stage - initial_stage);

        statscol->forget();
        early_stopping = m_optimizer->optimizeN(*statscol);
        statscol->finalize();
    }
    pb = 0;                                  // Finish progress bar right now

    // Some logging about the final values
    GaussianProcessNLLVariable::logVarray(hyperparam_vars,
                                          "Hyperparameter final values:");
    return nll;
}


void PLearn::GaussianProcessRegressor::inverseCovTimesVec ( real  sigma2,
Vec  u,
Vec  Cinv_u
) const [protected]

Definition at line 453 of file distributions/DEPRECATED/GaussianProcessRegressor.cc.

References PLearn::dot(), eigenvalues, eigenvectors, i, PLearn::TMat< T >::length(), m, PLearn::multiply(), and PLearn::multiplyAdd().

Referenced by train().

{
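    // Computes Cinv_u = (K + sigma2 I)^-1 u from the truncated eigendecomposition of K:
    // start from u / sigma2, then correct the component along each retained eigenvector.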
    int m=eigenvectors.length();
    real one_over_sigma2 = 1.0/sigma2;
    multiply(u,one_over_sigma2,Cinv_u);
    for (int i=0;i<m;i++)
    {
        Vec v_i = eigenvectors(i);
        real proj = dot(v_i,u);
        multiplyAdd(Cinv_u, v_i, proj*(1.0/(eigenvalues[i]+sigma2)-one_over_sigma2), Cinv_u);
    }
}


double PLearn::GaussianProcessRegressor::log_density ( const Vec & x) const [virtual]

return log of probability density log(p(x))

Reimplemented from PLearn::PDistribution.

Definition at line 211 of file distributions/DEPRECATED/GaussianProcessRegressor.cc.

References PLERROR.

{
    PLERROR("GaussianProcessRegressor::log_density not implemented yet");
    return 0;
}
virtual void PLearn::GaussianProcessRegressor::makeDeepCopyFromShallowCopy ( CopiesMap & copies) [virtual]

Transforms a shallow copy into a deep copy.

Reimplemented from PLearn::PConditionalDistribution.

void PLearn::GaussianProcessRegressor::makeDeepCopyFromShallowCopy ( CopiesMap & copies) [virtual]

Transforms a shallow copy into a deep copy.

Reimplemented from PLearn::PConditionalDistribution.

Definition at line 56 of file distributions/DEPRECATED/GaussianProcessRegressor.cc.

References alpha, PLearn::deepCopyField(), eigenvalues, eigenvectors, K, kernel, Kxx, Kxxi, PLearn::PConditionalDistribution::makeDeepCopyFromShallowCopy(), meanK, and noise_sd.

{
    inherited::makeDeepCopyFromShallowCopy(copies);

    // ### Call deepCopyField on all "pointer-like" fields 
    // ### that you wish to be deepCopied rather than 
    // ### shallow-copied.
    // ### ex:
    deepCopyField(kernel, copies);
    deepCopyField(noise_sd, copies);
    deepCopyField(alpha, copies);
    deepCopyField(Kxxi, copies);
    deepCopyField(Kxx, copies);
    deepCopyField(K, copies);
    deepCopyField(eigenvectors, copies);
    deepCopyField(eigenvalues, copies);
    deepCopyField(meanK, copies);
}


virtual int PLearn::GaussianProcessRegressor::nTestCosts ( ) const [inline, virtual]

Caches getTestCostNames().size() in an internal variable the first time it is called, and then returns the content of this variable.

Reimplemented from PLearn::PLearner.

Definition at line 171 of file distributions/DEPRECATED/GaussianProcessRegressor.h.

{ return 2; }
virtual int PLearn::GaussianProcessRegressor::nTrainCosts ( ) const [inline, virtual]

Caches getTrainCostNames().size() in an internal variable the first time it is called, and then returns the content of this variable.

Reimplemented from PLearn::PLearner.

Definition at line 173 of file distributions/DEPRECATED/GaussianProcessRegressor.h.

{ return 2; }
virtual int PLearn::GaussianProcessRegressor::outputsize ( ) const [virtual]

Returns the size of this learner's output, (which typically may depend on its inputsize(), targetsize() and set options).

Implements PLearn::PLearner.

int PLearn::GaussianProcessRegressor::outputsize ( ) const [virtual]

SUBCLASS WRITING: override this so that it returns the size of this learner's output, as a function of its inputsize(), targetsize() and set options.

Implements PLearn::PLearner.

Definition at line 155 of file distributions/DEPRECATED/GaussianProcessRegressor.cc.

References n_outputs, and PLearn::PDistribution::outputs_def.

Referenced by build_(), and computeCostsOnly().

{ 
    int output_size=0;
    if (outputs_def.find("e") != string::npos)
        output_size+=n_outputs;
    if (outputs_def.find("v") != string::npos)
        // we only compute a diagonal output variance
        output_size+=n_outputs;
    return output_size;
}

Here is the caller graph for this function:

real PLearn::GaussianProcessRegressor::QFormInverse ( real  sigma2,
Vec  u 
) const [protected]

Definition at line 466 of file distributions/DEPRECATED/GaussianProcessRegressor.cc.

References PLearn::dot(), eigenvalues, eigenvectors, i, PLearn::TMat< T >::length(), m, and PLearn::norm().

Referenced by variance().

{
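    // Intended quadratic form u' (K + sigma2 I)^-1 u, evaluated with the same
    // truncated eigendecomposition as inverseCovTimesVec.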
    int m=eigenvectors.length();
    real one_over_sigma2 = 1.0/sigma2;
    real qf = norm(u)*one_over_sigma2;
    for (int i=0;i<m;i++)
    {
        Vec v_i = eigenvectors(i);
        real proj = dot(v_i,u);
        qf += (1.0/(eigenvalues[i]+sigma2)-one_over_sigma2) * proj*proj;
    }
    return qf;
}


void PLearn::GaussianProcessRegressor::setInput ( const Vec & input) const [virtual]

Set the input part before using the inherited methods.

Reimplemented from PLearn::PConditionalDistribution.

Definition at line 75 of file distributions/DEPRECATED/GaussianProcessRegressor.cc.

References Gram_matrix_normalization, i, kernel, Kxx, Kxxi, PLearn::TVec< T >::length(), PLearn::mean(), mean_allK, meanK, and PLearn::sqrt().

{
    // compute K(x,x_i)
    for (int i=0;i<Kxxi.length();i++)
        Kxxi[i]=kernel->evaluate_x_i(input,i);
    // compute K(x,x)
    Kxx = kernel->evaluate(input,input);
    // apply normalization
    if (Gram_matrix_normalization=="centering_a_dotproduct")
    {
        real kmean = mean(Kxxi);
        for (int i=0;i<Kxxi.length();i++)
            Kxxi[i] = Kxxi[i] - kmean - meanK[i] + mean_allK;
        Kxx = Kxx - kmean - kmean + mean_allK;
    } else if (Gram_matrix_normalization=="centering_a_distance")
    {
        real kmean = mean(Kxxi);
        for (int i=0;i<Kxxi.length();i++)
            Kxxi[i] = -0.5*(Kxxi[i] - kmean - meanK[i] + mean_allK);
        Kxx = -0.5*(Kxx - kmean - kmean + mean_allK);
    }
    else if (Gram_matrix_normalization=="divisive")
    {
        real kmean = mean(Kxxi);
        for (int i=0;i<Kxxi.length();i++)
            Kxxi[i] = Kxxi[i]/sqrt(kmean* meanK[i]);
        Kxx = Kxx/kmean;
    }
}


void PLearn::GaussianProcessRegressor::setTrainingSet ( VMat  training_set,
bool  call_forget = true 
) [virtual]

Isolate the training inputs and create an ExtendedVMatrix (to include a bias) if required.

Reimplemented from PLearn::PLearner.

Definition at line 344 of file regressors/GaussianProcessRegressor.cc.

References PLASSERT, PLERROR, PLearn::VMat::subMatColumns(), and PLearn::VMat::toMat().

{
    PLASSERT( training_set );
    int inputsize = training_set->inputsize() ;
    if (inputsize < 0)
        PLERROR("GaussianProcessRegressor::setTrainingSet: the training set inputsize "
                "must be specified (current value = %d)", inputsize);

    // Convert to a real matrix in order to make saving it saner
    m_training_inputs = training_set.subMatColumns(0, inputsize).toMat();
    inherited::setTrainingSet(training_set, call_forget);
}


virtual void PLearn::GaussianProcessRegressor::train ( ) [virtual]

The role of the train method is to bring the learner up to stage==nstages, updating the train_stats collector with training costs measured on-line in the process.

Implements PLearn::PLearner.

void PLearn::GaussianProcessRegressor::train ( ) [virtual]

The role of the train method is to bring the learner up to stage==nstages, updating the stats with training costs measured on-line in the process.

Implements PLearn::PLearner.

Definition at line 341 of file distributions/DEPRECATED/GaussianProcessRegressor.cc.

References alpha, PLearn::columnMean(), eigenvalues, PLearn::eigenVecOfSymmMat(), eigenvectors, Gram_matrix_normalization, i, PLearn::PLearner::inputsize(), inverseCovTimesVec(), j, K, kernel, PLearn::TMat< T >::length(), m, max_nb_evectors, PLearn::mean(), mean_allK, meanK, PLearn::TMat< T >::mod(), n_outputs, noise_sd, PLearn::sqrt(), PLearn::VMat::subMatColumns(), PLearn::PLearner::targetsize(), PLearn::VMat::toMat(), PLearn::TMat< T >::toVec(), and PLearn::PLearner::train_set.

{
    // compute Gram matrix K
    int l=K.length();
    VMat input_rows = train_set.subMatColumns(0,inputsize());
    VMat target_rows = train_set.subMatColumns(inputsize(),targetsize());
    kernel->setDataForKernelMatrix(input_rows);
    kernel->computeGramMatrix(K);

    // SHOULD WE ADD THE NOISE VARIANCE BEFORE NORMALIZATION?

    // optionally "normalize" the gram matrix
    if (Gram_matrix_normalization=="centering_a_dotproduct")
    {
        columnMean(K,meanK);
        mean_allK = mean(meanK);
        int m=K.mod();
        real mean_allK = mean(meanK);
        for (int i=0;i<l;i++)
        {
            real* Ki = K[i];
            real* Kji_ = &K[0][i];
            for (int j=0;j<=i;j++,Kji_+=m)
            {
                real Kij = Ki[j] - meanK[i] - meanK[j] + mean_allK;
                Ki[j]=Kij;
                if (j<i)
                    *Kji_ =Kij;
            }
        }
    }
    else if (Gram_matrix_normalization=="centering_a_distance")
    {
        columnMean(K,meanK);
        mean_allK = mean(meanK);
        int m=K.mod();
        real mean_allK = mean(meanK);
        for (int i=0;i<l;i++)
        {
            real* Ki = K[i];
            real* Kji_ = &K[0][i];
            for (int j=0;j<=i;j++,Kji_+=m)
            {
                real Kij = -0.5*(Ki[j] - meanK[i] - meanK[j] + mean_allK);
                Ki[j]=Kij;
                if (j<i)
                    *Kji_ =Kij;
            }
        }
    }
    else if (Gram_matrix_normalization=="divisive")
    {
        columnMean(K,meanK);
        int m=K.mod();
        for (int i=0;i<l;i++)
        {
            real* Ki = K[i];
            real* Kji_ = &K[0][i];
            for (int j=0;j<=i;j++,Kji_+=m)
            {
                real Kij = Ki[j] / sqrt(meanK[i]*meanK[j]);
                Ki[j]=Kij;
                if (j<i)
                    *Kji_ =Kij;
            }
        }
    }
    // compute principal eigenvectors
    int n_components = max_nb_evectors<0 || max_nb_evectors>l ? l : max_nb_evectors;
    eigenVecOfSymmMat(K,n_components,eigenvalues,eigenvectors);
    // pre-compute alpha[i]=(K+noise_sd[i]^2 I)^{-1}*targets  for regression
    for (int i=0;i<n_outputs;i++)
    {
        VMat target_column = target_rows.subMatColumns(i,1);
        inverseCovTimesVec(noise_sd[i]*noise_sd[i],target_column.toMat().toVec(),alpha(i));
    }

}


void PLearn::GaussianProcessRegressor::trainProjectedProcess ( const Mat & all_training_inputs,
const Mat & sub_training_inputs,
const Mat & all_training_targets
) [protected]

Update the parameters required for the Projected Process approximation, assuming hyperparameters have already been optimized.

Definition at line 827 of file regressors/GaussianProcessRegressor.cc.

References PLearn::addToDiagonal(), PLearn::diag(), PLearn::endl(), i, PLearn::identityMatrix(), PLearn::lapackCholeskyDecompositionInPlace(), PLearn::lapackCholeskySolveInPlace(), PLearn::TMat< T >::length(), PLearn::max(), PLearn::mean(), n, PLASSERT, PLearn::productTransposeAcc(), PLearn::selectColumns(), PLearn::TVec< T >::size(), PLearn::TMat< T >::subMatRows(), PLearn::trace(), PLearn::transpose(), PLearn::transposeTransposeProduct(), and PLearn::TMat< T >::width().

{
    PLASSERT( m_kernel );
    const int activelength= m_active_set_indices.length();
    const int trainlength = all_training_inputs.length();
    const int targetsize  = all_training_targets.width();
    
    // The RHS matrix (when solving the linear system Gram*Params=RHS) is made
    // up of two parts: the regression targets themselves, and the identity
    // matrix if we requested them (for confidence intervals).  After solving
    // the linear system, set the gram-inverse appropriately.  To interface
    // nicely with LAPACK, we store this in a transposed format.
    int rhs_width = targetsize + (m_compute_confidence? activelength : 0);
    Mat tmp_rhs(rhs_width, activelength);
    if (m_compute_confidence) {
        Mat rhs_identity = tmp_rhs.subMatRows(targetsize, activelength);
        identityMatrix(rhs_identity);
    }

    // We always need to solve K_mm^-1.  Prepare the RHS with the identity
    // matrix to be ready to solve with a Cholesky decomposition.
    m_subgram_inverse.resize(activelength, activelength);
    Mat gram_cholesky(activelength, activelength);
    identityMatrix(m_subgram_inverse);
    
    // Compute Gram Matrix and add weight decay to diagonal.  This is done in a
    // few steps: (1) K_mm (using the active-set only), (2) then separately
    // compute K_mn (active-set by all examples), (3) computing the covariance
    // matrix of K_mn to give an m x m matrix, (4) and finally add them up.
    // cf. R&W p. 179, eq. 8.26 :: (sigma_n^2 K_mm + K_mn K_nm)
    m_kernel->setDataForKernelMatrix(all_training_inputs);
    Mat gram(activelength, activelength);
    Mat asym_gram(activelength, trainlength);
    Vec self_cov(activelength);
    m_kernel->computeTestGramMatrix(sub_training_inputs, asym_gram, self_cov);
    // Note: asym_gram contains K_mn without any sampling noise.

    // DBG_MODULE_LOG << "Asym_gram =\n" << asym_gram << endl;
    
    // Obtain K_mm, also without self-noise.  Add some jitter as per
    // the Rasmussen & Williams code
    selectColumns(asym_gram, m_active_set_indices, gram);
    real jitter = m_weight_decay * trace(gram);
    addToDiagonal(gram, jitter);

    // DBG_MODULE_LOG << "Kmm =\n" << gram << endl;
    
    // Obtain an estimate of the EFFECTIVE sampling noise from the
    // difference between self_cov and the diagonal of gram
    Vec sigma_sq = self_cov - diag(gram);
    for (int i=0, n=sigma_sq.size() ; i<n ; ++i) // ensure does not get negative
        sigma_sq[i] = max(m_weight_decay, sigma_sq[i]);
    double sigma_sq_est = mean(sigma_sq);
    // DBG_MODULE_LOG << "Sigma^2 estimate = " << sigma_sq_est << endl;

    // Before clobbering K_mm, compute its inverse.
    gram_cholesky << gram;
    lapackCholeskyDecompositionInPlace(gram_cholesky);
    lapackCholeskySolveInPlace(gram_cholesky, m_subgram_inverse,
                               true /* column-major */);
    
    gram *= sigma_sq_est;                            // sigma_n^2 K_mm
    productTransposeAcc(gram, asym_gram, asym_gram); // Inner part of eq. 8.26

    // DBG_MODULE_LOG << "Gram =\n" << gram << endl;
    
    // Dump a fragment of the Gram Matrix to the debug log
    DBG_MODULE_LOG << "Projected-process Gram fragment: "
                   << gram(0,0) << ' '
                   << gram(1,0) << ' '
                   << gram(1,1) << endl;

    // The RHS should contain (K_mn*y)' = y'*K_mn'.  Compute it.
    Mat targets_submat = tmp_rhs.subMatRows(0, targetsize);
    transposeTransposeProduct(targets_submat, all_training_targets, asym_gram);
    // DBG_MODULE_LOG << "Projected RHS =\n" << targets_submat << endl;
    
    // Compute Cholesky decomposition and solve the linear system.  LAPACK
    // solves in-place, but luckily we don't need either the Gram or the RHS
    // matrix after solving.
    lapackCholeskyDecompositionInPlace(gram);
    lapackCholeskySolveInPlace(gram, tmp_rhs, true /* column-major */);

    // Transpose final result.  LAPACK solved in-place for tmp_rhs.
    m_alpha.resize(tmp_rhs.width(), tmp_rhs.length());
    transpose(tmp_rhs, m_alpha);
    if (m_compute_confidence) {
        m_gram_inverse = m_alpha.subMatColumns(targetsize, activelength);
        m_alpha        = m_alpha.subMatColumns(0, targetsize);

        // Absorb sigma^2 into gram_inverse as per eq. 8.27 of R&W
        m_gram_inverse *= sigma_sq_est;
    }
}
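Putting the steps above together (a sketch following the code comments and eqs. 8.26-8.27 of Rasmussen & Williams; \sigma_n^2 is the effective noise estimate sigma_sq_est), the quantities stored at the end of this function are

    m_alpha           = ( \sigma_n^2 K_{mm} + K_{mn} K_{nm} )^{-1} K_{mn} y
    m_gram_inverse    = \sigma_n^2 ( \sigma_n^2 K_{mm} + K_{mn} K_{nm} )^{-1}    [only if m_compute_confidence]
    m_subgram_inverse = K_{mm}^{-1}

where m indexes the active set and n the full training set.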

Mat PLearn::GaussianProcessRegressor::variance ( ) const [virtual]

return Var[X]

Definition at line 244 of file distributions/DEPRECATED/GaussianProcessRegressor.cc.

References PLearn::TMat< T >::clear(), i, Kxx, Kxxi, PLearn::TMat< T >::length(), n_outputs, noise_sd, QFormInverse(), PLearn::TMat< T >::resize(), and PLearn::var().

Referenced by computeCostsFromOutputs(), and computeOutput().

{
    static Mat var;
    if (var.length()!=n_outputs)
    {
        var.resize(n_outputs,n_outputs);
        var.clear();
    }
    for (int i=0;i<n_outputs;i++)
    {
        real v = Kxx;
        v -= QFormInverse(noise_sd[i]*noise_sd[i],Kxxi);
        var(i,i) = v;
    }
    return var;
}
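In equation form (a sketch; Kxx plays the role of k(x,x) and Kxxi of the vector k(x) of kernel evaluations between the test point and the training points), each diagonal entry computed above is the usual GP predictive variance

    Var[ y_i | x ] = k(x,x) - k(x)' ( K + noise_sd[i]^2 I )^{-1} k(x)

with the off-diagonal entries left at zero.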

void PLearn::GaussianProcessRegressor::variance ( Vec  diag_variances) const [virtual]

Definition at line 233 of file distributions/DEPRECATED/GaussianProcessRegressor.cc.

References i, Kxx, Kxxi, n_outputs, noise_sd, and QFormInverse().

{
    for (int i=0;i<n_outputs;i++)
    {
        real v = Kxx;
        v -= QFormInverse(noise_sd[i]*noise_sd[i],Kxxi);
        diag_variances[i] = v;
    }
}


Member Data Documentation

If a sparse approximation algorithm is used (e.g. projected process), this specifies the indices of the training-set examples which should be considered to be part of the active set. Note that these indices must be SORTED IN INCREASING ORDER and should not contain duplicates.

Definition at line 209 of file regressors/GaussianProcessRegressor.h.

Matrix of learned parameters, determined from the equation

(K + lambda I)^-1 y

(don't forget that y can be a matrix for multivariate output problems).

In the case of the projected-process approximation, this instead contains the result of the equation

(lambda K_mm + K_mn K_nm)^-1 K_mn y

Definition at line 315 of file regressors/GaussianProcessRegressor.h.

If the kernel supports automatic relevance determination (ARD; e.g. SquaredExponentialARDKernel), the list of hyperparameters corresponding to each input can be created automatically by giving an option prefix and an initial value. The ARD options are created to have the form

'prefix[0]', 'prefix[1]', ..., 'prefix[N-1]'

where N is the number of inputs. This option is useful when the dataset inputsize is not (easily) known ahead of time.
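For example (illustrative only; 'isp_input_sigma' is a hypothetical prefix and the exact serialization syntax may differ), a prefix of 'isp_input_sigma' with an initial value of 1.0 on a dataset with 3 inputs would add the hyperparameters

    'isp_input_sigma[0]':1.0, 'isp_input_sigma[1]':1.0, 'isp_input_sigma[2]':1.0

to the list optimized at train time.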

Definition at line 178 of file regressors/GaussianProcessRegressor.h.

Whether to perform the additional train-time computations required to compute confidence intervals.

This includes computing a separate inverse of the Gram matrix. Specification of this option is necessary for calling both computeConfidenceFromOutput and computeOutputCovMat.

Definition at line 147 of file regressors/GaussianProcessRegressor.h.

Small regularization to be added post-hoc to the computed output covariance matrix and confidence intervals; this is mostly used as a disaster prevention device, to avoid negative predictive variance.

Definition at line 154 of file regressors/GaussianProcessRegressor.h.

Buffer to hold the product of the gram inverse with gram_traintest_inputs.

Definition at line 357 of file regressors/GaussianProcessRegressor.h.

Inverse of the Gram matrix, used to compute confidence intervals. It must be saved since the confidence intervals are obtained from the equation

sigma^2 = k(x,x) - k(x)'(K + lambda I)^-1 k(x)

An adjustment similar to 'alpha' is made for the projected-process approximation.

Definition at line 326 of file regressors/GaussianProcessRegressor.h.

Buffer for the product of the gram inverse with kernel evaluations.

Definition at line 347 of file regressors/GaussianProcessRegressor.h.

Buffer to hold the Gram matrix of train inputs with test inputs.

Element i,j contains K(test(i), train(j)).

Definition at line 354 of file regressors/GaussianProcessRegressor.h.

List of hyperparameters to optimize.

They must be specified in the form "option-name":initial-value, where 'option-name' is the name of an option to set within the Kernel object (the array-index form 'option[i]' is supported), and 'initial-value' is the starting point for the optimization (given as a PLearn-serialization string representation). Currently, the hyperparameters are constrained to be scalars.
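For example (illustrative only; the option names shown are hypothetical and the exact serialization syntax may differ), a specification such as

    "isp_signal_sigma":1.0, "isp_input_sigma[0]":1.0

would ask the optimizer to tune the kernel options 'isp_signal_sigma' and 'isp_input_sigma[0]', both starting from 1.0.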

Definition at line 165 of file regressors/GaussianProcessRegressor.h.

Whether to include a bias term in the regression (true by default).

The effect of this option is NOT to prepend a column of 1s to the inputs (which often has no effect for GP regression), but to estimate a separate mean of the targets, perform the GP regression on the zero-mean targets, and add the mean back when computing the outputs.

Definition at line 139 of file regressors/GaussianProcessRegressor.h.

Buffer to hold confidence intervals when computing costs from outputs.

Definition at line 350 of file regressors/GaussianProcessRegressor.h.

Kernel to use for the computation.

This must be a similarity kernel (i.e. closer vectors give higher kernel evaluations).

Definition at line 123 of file regressors/GaussianProcessRegressor.h.

Buffer for kernel evaluations at test time.

Definition at line 344 of file regressors/GaussianProcessRegressor.h.

Specification of the optimizer to use for train-time hyperparameter optimization.

A ConjGradientOptimizer should be an adequate choice.

Definition at line 184 of file regressors/GaussianProcessRegressor.h.

If true, the Gram matrix is saved before undergoing each Cholesky decomposition; this is useful for debugging if the matrix is quasi-singular.

It is saved in the current expdir under the names 'gram_matrix_N.pmat' where N is an increasing counter.

Definition at line 192 of file regressors/GaussianProcessRegressor.h.

Buffer to hold the sigma reductor for m_gram_inverse_product.

Definition at line 360 of file regressors/GaussianProcessRegressor.h.

Solution algorithm used for the regression.

If "exact", use the exact Gaussian process solution (requires O(N^3) computation). If "projected-process", use the PP approximation, which requires O(MN^2) computation, where M is given by the size of the active training examples specified by the "active-set" option. Default="exact".

Definition at line 201 of file regressors/GaussianProcessRegressor.h.

Inverse of the sub-Gram matrix, i.e. K_mm^-1. Used only with the projected-process approximation.

Definition at line 332 of file regressors/GaussianProcessRegressor.h.

Mean of the targets, if the option 'include_bias' is true.

Definition at line 335 of file regressors/GaussianProcessRegressor.h.

Saved version of the training-set inputs, which must be kept around for carrying out kernel evaluations with the test point.

If using the projected-process approximation, only the inputs in the active set are saved.

Definition at line 341 of file regressors/GaussianProcessRegressor.h.

Weight-decay coefficient (default = 0).

This is the lambda parameter in the class-help explanations, and corresponds to the variance of the observation noise in the Gaussian process generative model.

Definition at line 130 of file regressors/GaussianProcessRegressor.h.

Definition at line 91 of file distributions/DEPRECATED/GaussianProcessRegressor.h.

Referenced by declareOptions(), and train().

Definition at line 103 of file distributions/DEPRECATED/GaussianProcessRegressor.h.

Referenced by setInput(), and train().


The documentation for this class was generated from the following files:

 regressors/GaussianProcessRegressor.h
 regressors/GaussianProcessRegressor.cc
 distributions/DEPRECATED/GaussianProcessRegressor.h
 distributions/DEPRECATED/GaussianProcessRegressor.cc