PLearn 0.1
PLearn::GaussianContinuum Class Reference

#include <GaussianContinuum.h>

Inheritance diagram for PLearn::GaussianContinuum (graph omitted).
Collaboration diagram for PLearn::GaussianContinuum (graph omitted).


Public Member Functions

 GaussianContinuum ()
 Default constructor.
virtual void build ()
 Simply calls inherited::build() then build_().
virtual void makeDeepCopyFromShallowCopy (CopiesMap &copies)
 Transforms a shallow copy into a deep copy.
virtual string classname () const
virtual OptionList & getOptionList () const
virtual OptionMap & getOptionMap () const
virtual RemoteMethodMap & getRemoteMethodMap () const
virtual GaussianContinuum * deepCopy (CopiesMap &copies) const
virtual int outputsize () const
 Returns the size of this learner's output (which typically depends on its inputsize(), targetsize() and set options).
virtual void forget ()
 (Re-)initializes the PLearner in its fresh state (that state may depend on the 'seed' option) and sets 'stage' back to 0 (the stage of a fresh learner!).
virtual void initializeParams ()
virtual void train ()
 The role of the train method is to bring the learner up to stage==nstages, updating the train_stats collector with training costs measured on-line in the process.
virtual void computeOutput (const Vec &input, Vec &output) const
 Computes the output from the input.
virtual void computeCostsFromOutputs (const Vec &input, const Vec &output, const Vec &target, Vec &costs) const
 Computes the costs from already computed output.
virtual TVec< string > getTestCostNames () const
 Returns the names of the costs computed by computeCostsFromOutputs (and thus the test method).
virtual TVec< string > getTrainCostNames () const
 Returns the names of the objective costs that the train method computes and for which it updates the VecStatsCollector train_stats.

Static Public Member Functions

static string _classname_ ()
static OptionList & _getOptionList_ ()
static RemoteMethodMap & _getRemoteMethodMap_ ()
static Object * _new_instance_for_typemap_ ()
static bool _isa_ (const Object *o)
static void _static_initialize_ ()
static const PPath & declaringFile ()

Public Attributes

real weight_mu_and_tangent
bool include_current_point
real random_walk_step_prop
bool use_noise
bool use_noise_direction
real noise
string noise_type
int n_random_walk_step
int n_random_walk_per_point
bool save_image_mat
bool walk_on_noise
VMat image_points_vmat
Mat image_points_mat
Mat image_prob_mat
TMat< int > image_nearest_neighbors
real upper_y
real lower_y
real upper_x
real lower_x
int points_per_dim
real min_sigma
real min_diff
real min_p_x
bool print_parameters
bool sm_bigger_than_sn
int n_neighbors
int n_neighbors_density
int mu_n_neighbors
int n_dim
int compute_cost_every_n_epochs
string variances_transfer_function
real validation_prop
PP< Optimizer > optimizer
Var embedding
Func output_f
Func output_f_all
Func predictor
Func projection_error_f
Func noisy_data
string architecture_type
string output_type
int n_hidden_units
int batch_size
real norm_penalization
real svd_threshold

Static Public Attributes

static StaticInitializer _static_initializer_

Static Protected Member Functions

static void declareOptions (OptionList &ol)
 Declares this class' options.

Protected Attributes

int n
Func cost_of_one_example
Var x
Var noise_var
Var b
Var W
Var c
Var V
Var muV
Var smV
Var smb
Var snV
Var snb
Var tangent_targets
Var tangent_targets_and_point
Var tangent_plane
Var mu
Var sm
Var sn
Var mu_noisy
Var p_x
Var p_target
Var p_neighbors
Var p_neighbors_and_point
Var target_index
Var neigbor_indexes
Var sum_nll
Var min_sig
Var min_d
PP< PDistribution > dist
VMat valid_set
Array< VMat > ith_step_generated_set
VMat train_and_generated_set
VMat reference_set
TMat< int > train_nearest_neighbors
TMat< int > validation_nearest_neighbors
TVec< Mat > Bs
TVec< Mat > Fs
Mat mus
Vec sms
Vec sns
Mat Ut_svd
Mat V_svd
Vec S_svd
Vec z
Vec zm
Vec zn
Vec x_minus_neighbor
Vec w
Vec t_row
Vec neighbor_row
TVec< int > t_nn
Vec t_dist
Mat distances
DistanceKernel dk
real best_validation_cost
VarArray parameters

Private Types

typedef PLearner inherited

Private Member Functions

void build_ ()
 This does the actual building.
void compute_train_and_validation_costs ()
void make_random_walk ()
void update_reference_set_parameters ()
void knn (const VMat &vm, const Vec &x, const int &k, TVec< int > &neighbors, bool sortk) const
void get_image_matrix (VMat points, VMat image_points_vmat, int begin, string file_path, int n_near_neigh)
real get_nll (VMat points, VMat image_points_vmat, int begin, int n_near_neigh)

Detailed Description

Definition at line 56 of file GaussianContinuum.h.
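
GaussianContinuum is a density learner for data concentrated near a low-dimensional manifold: for each point it predicts n_dim tangent vectors spanning the local manifold directions, an offset mu, and two variances (sm along the manifold, sn in the orthogonal noise directions); computeOutput() returns the resulting density estimate p(x), whose negative log is the "NLL" test cost.

A minimal usage sketch (illustrative only, not from the PLearn sources; it assumes a training VMat trainvm whose rows are pure inputs, and the GradientOptimizer configuration is a placeholder):

  // Hypothetical driver code; option values are made up.
  PP<GaussianContinuum> learner = new GaussianContinuum();
  learner->n_dim = 1;                // number of tangent vectors to predict
  learner->n_neighbors = 10;         // neighbors used for gradient descent
  learner->n_hidden_units = 30;
  learner->optimizer = new GradientOptimizer();
  learner->nstages = 100;            // train for 100 epochs
  learner->setTrainingSet(trainvm);
  learner->build();
  learner->train();

  Vec input(learner->inputsize());
  Vec output(learner->outputsize()); // outputsize() is always 1
  trainvm->getRow(0, input);         // assumes the VMat width equals inputsize
  learner->computeOutput(input, output);
  // output[0] now holds the estimated density p(x) at this training point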


Member Typedef Documentation

typedef PLearner PLearn::GaussianContinuum::inherited [private]

Reimplemented from PLearn::PLearner.

Definition at line 61 of file GaussianContinuum.h.


Constructor & Destructor Documentation

PLearn::GaussianContinuum::GaussianContinuum ( )

Member Function Documentation

string PLearn::GaussianContinuum::_classname_ ( ) [static]

Reimplemented from PLearn::PLearner.

Definition at line 157 of file GaussianContinuum.cc.

OptionList & PLearn::GaussianContinuum::_getOptionList_ ( ) [static]

Reimplemented from PLearn::PLearner.

Definition at line 157 of file GaussianContinuum.cc.

RemoteMethodMap & PLearn::GaussianContinuum::_getRemoteMethodMap_ ( ) [static]

Reimplemented from PLearn::PLearner.

Definition at line 157 of file GaussianContinuum.cc.

bool PLearn::GaussianContinuum::_isa_ ( const Object * o) [static]

Reimplemented from PLearn::PLearner.

Definition at line 157 of file GaussianContinuum.cc.

Object * PLearn::GaussianContinuum::_new_instance_for_typemap_ ( ) [static]

Reimplemented from PLearn::Object.

Definition at line 157 of file GaussianContinuum.cc.

void PLearn::GaussianContinuum::_static_initialize_ ( ) [static]

Reimplemented from PLearn::PLearner.

Definition at line 157 of file GaussianContinuum.cc.

void PLearn::GaussianContinuum::build ( ) [virtual]

Simply calls inherited::build() then build_().

Reimplemented from PLearn::PLearner.

Definition at line 1259 of file GaussianContinuum.cc.

References PLearn::PLearner::build(), and build_().

void PLearn::GaussianContinuum::build_ ( ) [private]

This does the actual building.

Reimplemented from PLearn::PLearner.

Definition at line 573 of file GaussianContinuum.cc.

References a, architecture_type, b, best_validation_cost, Bs, c, PLearn::TMat< T >::clear(), PLearn::TVec< T >::clear(), cost_of_one_example, PLearn::diagonalized_factors_product(), PLearn::diff(), dist, embedding, PLearn::exp(), Fs, i, include_current_point, PLearn::PLearner::inputsize_, knn(), PLearn::VMat::length(), min_d, min_diff, min_p_x, min_sig, min_sigma, mu, mu_n_neighbors, mu_noisy, mus, muV, n, n_dim, n_hidden_units, n_neighbors, n_neighbors_density, neighbor_row, PLearn::VarArray::nelems(), PLearn::nll_semispherical_gaussian(), PLearn::no_bprop(), noise, noise_type, noise_var, noisy_data, output_f_all, p_neighbors, p_neighbors_and_point, p_target, p_x, parameters, PLERROR, PLearn::pownorm(), predictor, PLearn::product(), reference_set, PLearn::reshape(), PLearn::TMat< T >::resize(), PLearn::TVec< T >::resize(), PLearn::TVec< T >::size(), sm, sm_bigger_than_sn, smb, sms, smV, sn, snb, sns, snV, PLearn::softplus(), PLearn::square(), sum_nll, svd_threshold, t_row, tangent_plane, tangent_targets, tangent_targets_and_point, PLearn::tanh(), target_index, train_nearest_neighbors, PLearn::PLearner::train_set, PLearn::transpose(), use_noise, use_noise_direction, Ut_svd, V, V_svd, valid_set, validation_nearest_neighbors, validation_prop, variances_transfer_function, PLearn::vconcat(), w, W, weight_mu_and_tangent, x, x_minus_neighbor, z, zm, and zn.

Referenced by build().

{

  n = PLearner::inputsize_;

  if (n>0)
  {
    Var log_n_examples(1,1,"log(n_examples)");


    {
      if (n_hidden_units <= 0)
        PLERROR("GaussianContinuum::Number of hidden units should be positive, now %d\n",n_hidden_units);

      if(validation_prop <= 0 || validation_prop >= 1) valid_set = train_set;
      else
      {
        // Making FractionSplitter
        PP<FractionSplitter> fsplit = new FractionSplitter();
        TMat<pair<real,real> > splits(1,2); 
        splits(0,0).first = 0; splits(0,0).second = 1-validation_prop;
        splits(0,1).first = 1-validation_prop; splits(0,1).second = 1;
        fsplit->splits = splits;
        fsplit->build();
      
        // Making RepeatSplitter
        PP<RepeatSplitter> rsplit = new RepeatSplitter();
        rsplit->n = 1;
        rsplit->shuffle = true;
        rsplit->seed = 123456;
        rsplit->to_repeat = fsplit;
        rsplit->setDataSet(train_set);
        rsplit->build();

        TVec<VMat> vmat_splits = rsplit->getSplit();
        train_set = vmat_splits[0];
        valid_set = vmat_splits[1];
      
      }

      x = Var(n);
      c = Var(n_hidden_units,1,"c ");
      V = Var(n_hidden_units,n,"V ");               
      Var a = tanh(c + product(V,x));
      muV = Var(n,n_hidden_units,"muV "); 
      smV = Var(1,n_hidden_units,"smV ");  
      smb = Var(1,1,"smB ");
      snV = Var(1,n_hidden_units,"snV ");  
      snb = Var(1,1,"snB ");      
        

      if(architecture_type == "embedding_neural_network")
      {
        W = Var(n_dim,n_hidden_units,"W ");       
        tangent_plane = diagonalized_factors_product(W,1-a*a,V); 
        embedding = product(W,a);
      } 
      else if(architecture_type == "single_neural_network")
      {
        b = Var(n_dim*n,1,"b");
        W = Var(n_dim*n,n_hidden_units,"W ");
        tangent_plane = reshape(b + product(W,tanh(c + product(V,x))),n_dim,n);
      }
      else
        PLERROR("GaussianContinuum::build_, unknown architecture_type option %s",
                architecture_type.c_str());
     
      mu = product(muV,a); 
      min_sig = new SourceVariable(1,1);
      min_sig->value[0] = min_sigma;
      min_sig->setName("min_sig");
      min_d = new SourceVariable(1,1);
      min_d->value[0] = min_diff;
      min_d->setName("min_d");

      if(noise > 0)
      {
        if(noise_type == "uniform")
        {
          PP<UniformDistribution> temp = new UniformDistribution();
          Vec lower_noise(n);
          Vec upper_noise(n);
          for(int i=0; i<n; i++)
          {
            lower_noise[i] = -1*noise;
            upper_noise[i] = noise;
          }
          temp->min = lower_noise;
          temp->max = upper_noise;
          dist = temp;
        }
        else if(noise_type == "gaussian")
        {
          PP<GaussianDistribution> temp = new GaussianDistribution();
          Vec mu(n); mu.clear();
          Vec eig_values(n); 
          Mat eig_vectors(n,n); eig_vectors.clear();
          for(int i=0; i<n; i++)
          {
            eig_values[i] = noise; // maybe should be adjusted to the sigma of the noise at the input
            eig_vectors(i,i) = 1.0;
          }
          temp->mu = mu;
          temp->eigenvalues = eig_values;
          temp->eigenvectors = eig_vectors;
          dist = temp;
        }
        else PLERROR("In GaussianContinuum::build_() : noise_type %c not defined",noise_type.c_str());
        noise_var = new PDistributionVariable(x,dist);
        if(use_noise_direction)
        {
          for(int k=0; k<n_dim; k++)
          {
            Var index_var = new SourceVariable(1,1);
            index_var->value[0] = k;
            Var f_k = new VarRowVariable(tangent_plane,index_var);
            noise_var = noise_var - product(f_k,noise_var)* transpose(f_k)/pownorm(f_k,2);
          }
        }
        noise_var = no_bprop(noise_var);
        noise_var->setName(noise_type);
      }
      else
      {
        noise_var = new SourceVariable(n,1);
        noise_var->setName("no noise");
        for(int i=0; i<n; i++)
          noise_var->value[i] = 0;
      }


      // Path for noisy mu
      Var a_noisy = tanh(c + product(V,x+noise_var));
      mu_noisy = product(muV,a_noisy); 

      if(sm_bigger_than_sn)
      {
        if(variances_transfer_function == "softplus") sn = softplus(snb + product(snV,a)) + min_sig;
        else if(variances_transfer_function == "square") sn = square(snb + product(snV,a)) + min_sig;
        else if(variances_transfer_function == "exp") sn = exp(snb + product(snV,a)) + min_sig;
        else PLERROR("In GaussianContinuum::build_ : unknown variances_transfer_function option %s ", variances_transfer_function.c_str());
        Var diff;
        
        if(variances_transfer_function == "softplus") diff = softplus(smb + product(smV,a)) + min_d;
        else if(variances_transfer_function == "square") diff = square(smb + product(smV,a)) + min_d;
        else if(variances_transfer_function == "exp") diff = exp(smb + product(smV,a)) + min_d;
        sm = sn + diff;
      }
      else
      {
        if(variances_transfer_function == "softplus"){
          sm = softplus(smb + product(smV,a)) + min_sig; 
          sn = softplus(snb + product(snV,a)) + min_sig;
        }
        else if(variances_transfer_function == "square"){
          sm = square(smb + product(smV,a)) + min_sig; 
          sn = square(snb + product(snV,a)) + min_sig;
        }
        else if(variances_transfer_function == "exp"){
          sm = exp(smb + product(smV,a)) + min_sig; 
          sn = exp(snb + product(snV,a)) + min_sig;
        }
        else PLERROR("In GaussianContinuum::build_ : unknown variances_transfer_function option %s ", variances_transfer_function.c_str());
      }
      
      mu_noisy->setName("mu_noisy ");
      tangent_plane->setName("tangent_plane ");
      mu->setName("mu ");
      sm->setName("sm ");
      sn->setName("sn ");
      a_noisy->setName("a_noisy ");
      a->setName("a ");
      if(architecture_type == "embedding_neural_network")
        embedding->setName("embedding ");
      x->setName("x ");

      if(architecture_type == "embedding_neural_network")
        predictor = Func(x, W & c & V & muV & smV & smb & snV & snb, tangent_plane & mu & sm & sn);
      if(architecture_type == "single_neural_network")
        predictor = Func(x, b & W & c & V & muV & smV & smb & snV & snb, tangent_plane & mu & sm & sn);
      /*
      if (output_type=="tangent_plane")
        output_f = Func(x, tangent_plane);
      else if (output_type=="embedding")
      {
        if(architecture_type == "single_neural_network")
          PLERROR("Cannot obtain embedding with single_neural_network architecture");
        output_f = Func(x, embedding);
      }
      else if (output_type=="tangent_plane+embedding")
      {
        if(architecture_type == "single_neural_network")
          PLERROR("Cannot obtain embedding with single_neural_network architecture");
        output_f = Func(x, tangent_plane & embedding);
      }
      else if(output_type == "tangent_plane_variance_normalized")
        output_f = Func(x,tangent_plane & sm);
      else if(output_type == "semispherical_gaussian_parameters")
        output_f = Func(x,tangent_plane & mu & sm & sn);
      */
      output_f_all = Func(x,tangent_plane & mu & sm & sn);
    }
    

    if (parameters.size()>0 && parameters.nelems() == predictor->parameters.nelems())
      predictor->parameters.copyValuesFrom(parameters);
    parameters.resize(predictor->parameters.size());
    for (int i=0;i<parameters.size();i++)
      parameters[i] = predictor->parameters[i];

    Var target_index = Var(1,1);
    target_index->setName("target_index");
    Var neighbor_indexes = Var(n_neighbors,1);
    neighbor_indexes->setName("neighbor_indexes");
    p_x = Var(train_set->length(),1);
    p_x->setName("p_x");
    p_target = new VarRowsVariable(p_x,target_index);
    p_target->setName("p_target");
    p_neighbors =new VarRowsVariable(p_x,neighbor_indexes);
    p_neighbors->setName("p_neighbors");

    tangent_targets = Var(n_neighbors,n);
    if(include_current_point)
    {
      Var temp = new SourceVariable(1,n);
      temp->value.fill(0);
      tangent_targets_and_point = vconcat(temp & tangent_targets);
      p_neighbors_and_point = vconcat(p_target & p_neighbors);
    }
    else
    {
      tangent_targets_and_point = tangent_targets;
      p_neighbors_and_point = p_neighbors;
    }
    
    if(mu_n_neighbors < 0 ) mu_n_neighbors = n_neighbors;

    // compute - log ( sum_{neighbors of x} P(neighbor|x) ) according to semi-spherical model
    Var nll = nll_semispherical_gaussian(tangent_plane, mu, sm, sn, tangent_targets_and_point, p_target, p_neighbors_and_point, noise_var, mu_noisy,
                                         use_noise, svd_threshold, min_p_x, mu_n_neighbors); // + log_n_examples;
    //nll_f = Func(tangent_plane & mu & sm & sn & tangent_targets, nll);
    Var knn = new SourceVariable(1,1);
    knn->setName("knn");
    knn->value[0] = n_neighbors + (include_current_point ? 1 : 0);

    if(weight_mu_and_tangent != 0)
    {
      sum_nll = new ColumnSumVariable(nll) / knn + weight_mu_and_tangent * ((Var) new RowSumVariable(square(product(no_bprop(tangent_plane),mu_noisy))));
    }
    else
      sum_nll = new ColumnSumVariable(nll) / knn;

    cost_of_one_example = Func(x & tangent_targets & target_index & neighbor_indexes, predictor->parameters, sum_nll);
    noisy_data = Func(x,x + noise_var);    // Func to check what the noisy data looks like (doesn't work so far; to be investigated)
    //verify_gradient_func = Func(predictor->inputs & tangent_targets & target_index & neighbor_indexes, predictor->parameters & mu_noisy, sum_nll);  

    if(n_neighbors_density > train_set.length() || n_neighbors_density < 0) n_neighbors_density = train_set.length();

    best_validation_cost = REAL_MAX;

    train_nearest_neighbors.resize(train_set.length(), n_neighbors_density-1);
    validation_nearest_neighbors.resize(valid_set.length(), n_neighbors_density);

    t_row.resize(n);
    Ut_svd.resize(n,n);
    V_svd.resize(n_dim,n_dim);
    z.resize(n);
    zm.resize(n);
    zn.resize(n);
    x_minus_neighbor.resize(n);
    neighbor_row.resize(n);
    w.resize(n_dim);

    Bs.resize(train_set.length());
    Fs.resize(train_set.length());
    mus.resize(train_set.length(), n);
    sms.resize(train_set.length());
    sns.resize(train_set.length());
    
    reference_set = train_set;
  }

}
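
In LaTeX, the predictor wired together above computes (a sketch reconstructed from the Var graph; softplus case with sm_bigger_than_sn set, the square and exp variants simply swap the transfer function):

  a = \tanh(c + V x)
  F = W \,\mathrm{diag}(1 - a \odot a)\, V              % embedding_neural_network: Jacobian of e(x) = W a
  F = \mathrm{reshape}(b + W a,\; n_{\mathrm{dim}} \times n)   % single_neural_network
  \mu = \mathrm{muV}\, a
  s_n = \mathrm{softplus}(\mathrm{snb} + \mathrm{snV}\, a) + \mathrm{min\_sigma}
  s_m = s_n + \mathrm{softplus}(\mathrm{smb} + \mathrm{smV}\, a) + \mathrm{min\_diff}

The criterion sum_nll then averages nll_semispherical_gaussian over the n_neighbors targets (plus the current point if include_current_point is set), optionally adding weight_mu_and_tangent times the squared scalar products between the tangent vectors and mu_noisy.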


string PLearn::GaussianContinuum::classname ( ) const [virtual]

Reimplemented from PLearn::Object.

Definition at line 157 of file GaussianContinuum.cc.

void PLearn::GaussianContinuum::compute_train_and_validation_costs ( ) [private]

Definition at line 1187 of file GaussianContinuum.cc.

References Bs, PLearn::endl(), PLearn::exp(), Fs, PLearn::VMat::length(), PLearn::log(), Log2Pi, mu, mus, n, n_dim, n_neighbors_density, neighbor_row, output_f_all, p_x, PLearn::pownorm(), print_parameters, PLearn::product(), sm, sms, sn, sns, PLearn::substract(), t_row, tangent_plane, train_nearest_neighbors, PLearn::PLearner::train_set, PLearn::transposeProduct(), update_reference_set_parameters(), valid_set, validation_nearest_neighbors, PLearn::PLearner::verbosity, w, PLearn::TMat< T >::width(), x, x_minus_neighbor, z, zm, and zn.

Referenced by train().

{
  update_reference_set_parameters();

  // estimate p(x) for the training set

  real nll_train = 0;

  for(int t=0; t<train_set.length(); t++)
  {

    train_set->getRow(t,t_row);
    p_x->value[t] = 0;
    // fetching nearest neighbors for density estimation
    for(int neighbor=0; neighbor<train_nearest_neighbors.width(); neighbor++)
    {
      train_set->getRow(train_nearest_neighbors(t,neighbor),neighbor_row);
      substract(t_row,neighbor_row,x_minus_neighbor);
      substract(x_minus_neighbor,mus(train_nearest_neighbors(t,neighbor)),z);
      product(w, Bs[train_nearest_neighbors(t,neighbor)], z);
      transposeProduct(zm, Fs[train_nearest_neighbors(t,neighbor)], w);
      substract(z,zm,zn);
      p_x->value[t] += exp(-0.5*(pownorm(zm,2)/sms[train_nearest_neighbors(t,neighbor)] + pownorm(zn,2)/sns[train_nearest_neighbors(t,neighbor)] 
                         + n_dim*log(sms[train_nearest_neighbors(t,neighbor)]) + (n-n_dim)*log(sns[train_nearest_neighbors(t,neighbor)])) - n/2.0 * Log2Pi);
    }
    p_x->value[t] /= train_set.length();
    nll_train -= log(p_x->value[t]);

    if(print_parameters)
    {
      output_f_all(t_row);
      cout << "data point = " << x->value << " parameters = " << tangent_plane->value << " " << mu->value << " " << sm->value << " " << sn->value << " p(x) = " << p_x->value[t] << endl;
    }
  }

  nll_train /= train_set.length();

  if(verbosity > 2) cout << "NLL train = " << nll_train << endl;

  // estimate p(x) for the validation set

  real nll_validation = 0;

  for(int t=0; t<valid_set.length(); t++)
  {

    valid_set->getRow(t,t_row);
    real this_p_x = 0;
    // fetching nearest neighbors for density estimation
    for(int neighbor=0; neighbor<n_neighbors_density; neighbor++)
    {
      train_set->getRow(validation_nearest_neighbors(t,neighbor), neighbor_row);
      substract(t_row,neighbor_row,x_minus_neighbor);
      substract(x_minus_neighbor,mus(validation_nearest_neighbors(t,neighbor)),z);
      product(w, Bs[validation_nearest_neighbors(t,neighbor)], z);
      transposeProduct(zm, Fs[validation_nearest_neighbors(t,neighbor)], w);
      substract(z,zm,zn);
      this_p_x += exp(-0.5*(pownorm(zm,2)/sms[validation_nearest_neighbors(t,neighbor)] + pownorm(zn,2)/sns[validation_nearest_neighbors(t,neighbor)] 
                         + n_dim*log(sms[validation_nearest_neighbors(t,neighbor)]) + (n-n_dim)*log(sns[validation_nearest_neighbors(t,neighbor)])) - n/2.0 * Log2Pi);
    }

    this_p_x /= train_set.length();  // When points will be added using a random walk, this will need to be changed (among other things...)
    nll_validation -= log(this_p_x);
  }

  nll_validation /= valid_set.length();

  if(verbosity > 2) cout << "NLL validation = " << nll_validation << endl;

}
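
Both loops evaluate the same estimator. With d = n_dim, N = train_set.length() and one term per nearest neighbor j, the code computes, in LaTeX:

  \hat p(x) = \frac{1}{N} \sum_{j \in \mathrm{nn}(x)} \exp\!\Big( -\frac{1}{2}\Big[ \frac{\|z_m\|^2}{s_m^{(j)}} + \frac{\|z_n\|^2}{s_n^{(j)}} + d \log s_m^{(j)} + (n-d) \log s_n^{(j)} \Big] - \frac{n}{2} \log 2\pi \Big)

where z = x - x_j - \mu_j, z_m = F_j^\top B_j z is the component of z in the tangent plane, and z_n = z - z_m; the reported costs are the averages of -\log \hat p(x) over the training and validation sets.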


void PLearn::GaussianContinuum::computeCostsFromOutputs ( const Vec & input,
const Vec & output,
const Vec & target,
Vec & costs
) const [virtual]

Computes the costs from already computed output.

Implements PLearn::PLearner.

Definition at line 1637 of file GaussianContinuum.cc.

References PLearn::log().

{
  costs[0] = -log(output[0]);
}                                


void PLearn::GaussianContinuum::computeOutput ( const Vec & input,
Vec & output
) const [virtual]

Computes the output from the input.

Reimplemented from PLearn::PLearner.

Definition at line 1590 of file GaussianContinuum.cc.

References Bs, PLearn::exp(), Fs, knn(), PLearn::VMat::length(), PLearn::TVec< T >::length(), PLearn::log(), Log2Pi, mus, n, n_dim, n_neighbors_density, neighbor_row, PLearn::pownorm(), PLearn::product(), reference_set, sms, sns, PLearn::substract(), t_nn, t_row, PLearn::transposeProduct(), w, x_minus_neighbor, z, zm, and zn.

{
  // compute density
  real ret = 0;

  // fetching nearest neighbors for density estimation
  knn(reference_set,input,n_neighbors_density,t_nn,bool(0));
  t_row << input;
  for(int neighbor=0; neighbor<t_nn.length(); neighbor++)
  {
    reference_set->getRow(t_nn[neighbor],neighbor_row);
    substract(t_row,neighbor_row,x_minus_neighbor);
    substract(x_minus_neighbor,mus(t_nn[neighbor]),z);
    product(w, Bs[t_nn[neighbor]], z);
    transposeProduct(zm, Fs[t_nn[neighbor]], w);
    substract(z,zm,zn);
    ret += exp(-0.5*(pownorm(zm,2)/sms[t_nn[neighbor]] + pownorm(zn,2)/sns[t_nn[neighbor]] 
                               + n_dim*log(sms[t_nn[neighbor]]) + (n-n_dim)*log(sns[t_nn[neighbor]])) - n/2.0 * Log2Pi);
  }
  ret /= reference_set.length();
  output[0] = ret;
  /*
  if(output_type == "tangent_plane_variance_normalized")
  {
    int nout = outputsize()+1;
    Vec temp_output(nout);
    temp_output << output_f(input);
    Mat F = temp_output.subVec(0,temp_output.length()-1).toMat(n_dim,n);
    if(n_dim*n != temp_output.length()-1) PLERROR("WHAT!!!");
    for(int i=0; i<F.length(); i++)
    {
      real norm = pownorm(F(i),1);
      F(i) *= sqrt(temp_output[temp_output.length()-1])/norm;
    }
    
    output.resize(temp_output.length()-1);
    output << temp_output.subVec(0,temp_output.length()-1);
  }
  else
  {
    int nout = outputsize();
    output.resize(nout);
    output << output_f(input);
  }
  */
}    


void PLearn::GaussianContinuum::declareOptions ( OptionList & ol ) [static, protected]

Declares this class' options.

Reimplemented from PLearn::PLearner.

Definition at line 381 of file GaussianContinuum.cc.

References architecture_type, batch_size, Bs, PLearn::OptionBase::buildoption, compute_cost_every_n_epochs, PLearn::declareOption(), PLearn::PLearner::declareOptions(), Fs, include_current_point, PLearn::OptionBase::learntoption, lower_x, lower_y, min_diff, min_p_x, min_sigma, mu_n_neighbors, mus, n_dim, n_hidden_units, n_neighbors, n_neighbors_density, n_random_walk_per_point, n_random_walk_step, noise, noise_type, optimizer, parameters, points_per_dim, print_parameters, random_walk_step_prop, reference_set, save_image_mat, sm_bigger_than_sn, sms, sns, svd_threshold, upper_x, upper_y, use_noise, use_noise_direction, validation_prop, variances_transfer_function, walk_on_noise, and weight_mu_and_tangent.

{
  // ### Declare all of this object's options here
  // ### For the "flags" of each option, you should typically specify  
  // ### one of OptionBase::buildoption, OptionBase::learntoption or 
  // ### OptionBase::tuningoption. Another possible flag to be combined with
  // ### is OptionBase::nosave

  declareOption(ol, "weight_mu_and_tangent", &GaussianContinuum::weight_mu_and_tangent, OptionBase::buildoption,
                "Weight of the cost on the scalar product between the manifold directions and mu.\n"
                );

  declareOption(ol, "include_current_point", &GaussianContinuum::include_current_point, OptionBase::buildoption,
                "Indication that the current point should be included in the nearest neighbors.\n"
                );

  declareOption(ol, "n_neighbors", &GaussianContinuum::n_neighbors, OptionBase::buildoption,
                "Number of nearest neighbors to consider for gradient descent.\n"
                );

  declareOption(ol, "n_neighbors_density", &GaussianContinuum::n_neighbors_density, OptionBase::buildoption,
                "Number of nearest neighbors to consider for p(x) density estimation.\n"
                );

  declareOption(ol, "mu_n_neighbors", &GaussianContinuum::mu_n_neighbors, OptionBase::buildoption,
                "Number of nearest neighbors to learn the mus (if < 0, mu_n_neighbors = n_neighbors).\n"
                );

  declareOption(ol, "n_dim", &GaussianContinuum::n_dim, OptionBase::buildoption,
                "Number of tangent vectors to predict.\n"
                );

  declareOption(ol, "compute_cost_every_n_epochs", &GaussianContinuum::compute_cost_every_n_epochs, OptionBase::buildoption,
                "Frequency of the computation of the cost on the training and validation set. \n"
                );

  declareOption(ol, "optimizer", &GaussianContinuum::optimizer, OptionBase::buildoption,
                "Optimizer that optimizes the cost function.\n"
                );
                  
  declareOption(ol, "variances_transfer_function", &GaussianContinuum::variances_transfer_function, 
                OptionBase::buildoption,
                "Type of output transfer function for predicted variances, to force them to be >0:\n"
                "  square : take the square\n"
                "  exp : apply the exponential\n"
                "  softplus : apply the function log(1+exp(.))\n"
                );
                  
  declareOption(ol, "architecture_type", &GaussianContinuum::architecture_type, OptionBase::buildoption,
                "For pre-defined tangent_predictor types: \n"
                "   single_neural_network : prediction = b + W*tanh(c + V*x), where W has n_hidden_units columns\n"
                "                          where the resulting vector is viewed as a n_dim by n matrix\n"
    "   embedding_neural_network: prediction[k,i] = d(e[k]/d(x[i), where e(x) is an ordinary neural\n"
    "                             network representing the embedding function (see output_type option)\n"
                "where (b,W,c,V) are parameters to be optimized.\n"
                );

  declareOption(ol, "n_hidden_units", &GaussianContinuum::n_hidden_units, OptionBase::buildoption,
                "Number of hidden units (if architecture_type is some kind of neural network)\n"
                );
/*
  declareOption(ol, "output_type", &GaussianContinuum::output_type, OptionBase::buildoption,
                "Default value (the only one considered if architecture_type != embedding_*) is\n"
    "   tangent_plane: output the predicted tangent plane.\n"
    "   embedding: output the embedding vector (only if architecture_type == embedding_*).\n"
    "   tangent_plane+embedding: output both (in this order).\n"
                );
*/
 
  declareOption(ol, "batch_size", &GaussianContinuum::batch_size, OptionBase::buildoption, 
                "    how many samples to use to estimate the average gradient before updating the weights\n"
                "    0 is equivalent to specifying training_set->length() \n");

  declareOption(ol, "svd_threshold", &GaussianContinuum::svd_threshold, OptionBase::buildoption,
                "Threshold to accept singular values of F in solving for linear combination weights on tangent subspace.\n"
                );

  declareOption(ol, "print_parameters", &GaussianContinuum::print_parameters, OptionBase::buildoption,
                "Indication that the parameters should be printed for the training set points.\n"
                );

   declareOption(ol, "sm_bigger_than_sn", &GaussianContinuum::sm_bigger_than_sn, OptionBase::buildoption,
                "Indication that sm should always be bigger than sn.\n"
                );

  declareOption(ol, "save_image_mat", &GaussianContinuum::save_image_mat, OptionBase::buildoption,
                "Indication that a matrix corresponding to the probabilities of the points on a 2d grid should be created.\n"
                );

  declareOption(ol, "walk_on_noise", &GaussianContinuum::walk_on_noise, OptionBase::buildoption,
                "Indication that the random walk should also consider the noise variation.\n"
                );

  declareOption(ol, "upper_y", &GaussianContinuum::upper_y, OptionBase::buildoption,
                "Upper bound on the y (second) coordinate.\n"
                );
  
  declareOption(ol, "upper_x", &GaussianContinuum::upper_x, OptionBase::buildoption,
                "Lower bound on the x (first) coordinate.\n"
                );

  declareOption(ol, "lower_y", &GaussianContinuum::lower_y, OptionBase::buildoption,
                "Lower bound on the y (second) coordinate.\n"
                );
  
  declareOption(ol, "lower_x", &GaussianContinuum::lower_x, OptionBase::buildoption,
                "Lower bound on the x (first) coordinate.\n"
                );

  declareOption(ol, "points_per_dim", &GaussianContinuum::points_per_dim, OptionBase::buildoption,
                "Number of points per dimension on the grid.\n"
                );

  declareOption(ol, "parameters", &GaussianContinuum::parameters, OptionBase::learntoption,
                "Parameters of the tangent_predictor function.\n"
                );

  declareOption(ol, "Bs", &GaussianContinuum::Bs, OptionBase::learntoption,
                "The B matrices for the training set.\n"
                );

  declareOption(ol, "Fs", &GaussianContinuum::Fs, OptionBase::learntoption,
                "The F (tangent planes) matrices for the training set.\n"
                );

  declareOption(ol, "mus", &GaussianContinuum::mus, OptionBase::learntoption,
                "The mu vertors for the training set.\n"
                );

  declareOption(ol, "sms", &GaussianContinuum::sms, OptionBase::learntoption,
                "The sm values for the training set.\n"
                );
  
  declareOption(ol, "sns", &GaussianContinuum::sns, OptionBase::learntoption,
                "The sn values for the training set.\n"
                );

  declareOption(ol, "min_sigma", &GaussianContinuum::min_sigma, OptionBase::buildoption,
                "The minimum value for sigma noise and manifold.\n"
                );

  declareOption(ol, "min_diff", &GaussianContinuum::min_diff, OptionBase::buildoption,
                "The minimum value for the difference between sigma manifold and noise.\n"
                );

  declareOption(ol, "min_p_x", &GaussianContinuum::min_p_x, OptionBase::buildoption,
                "The minimum value for p_x, for stability concerns when doing gradient descent.\n"
                );

  declareOption(ol, "n_random_walk_step", &GaussianContinuum::n_random_walk_step, OptionBase::buildoption,
                "The number of random walk step.\n"
                );

  declareOption(ol, "n_random_walk_per_point", &GaussianContinuum::n_random_walk_per_point, OptionBase::buildoption,
                "The number of random walks per training set point.\n"
                );

  declareOption(ol, "noise", &GaussianContinuum::noise, OptionBase::buildoption,
                "Noise parameter for the training data.\n"
                );

  declareOption(ol, "noise_type", &GaussianContinuum::noise_type, OptionBase::buildoption,
                "Type of the noise (\"uniform\" or \"gaussian\").\n"
                );

  declareOption(ol, "use_noise", &GaussianContinuum::use_noise, OptionBase::buildoption,
                "Indication that the training should be done using noise on training data.\n"
                );

  declareOption(ol, "use_noise_direction", &GaussianContinuum::use_noise_direction, OptionBase::buildoption,
                "Indication that the noise should be directed in the noise directions.\n"
                );

  declareOption(ol, "random_walk_step_prop", &GaussianContinuum::random_walk_step_prop, OptionBase::buildoption,
                "Proportion or confidence of the random walk steps.\n"
                );

  declareOption(ol, "validation_prop", &GaussianContinuum::validation_prop, OptionBase::buildoption,
                "Proportion of points for validation set (if uncorrect value, validtion_set == train_set).\n"
                );
  
  declareOption(ol, "reference_set", &GaussianContinuum::reference_set, OptionBase::learntoption,
                "Reference points for density computation.\n"
                );
  
  


  // Now call the parent class' declareOptions
  inherited::declareOptions(ol);
}
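
Besides direct member access, the options declared above can be set by name through the generic Object interface. A small sketch (assumes PLearn::Object::setOption(optionname, value); passing string-valued options in quoted, serialized form is an assumption here):

  PP<GaussianContinuum> gc = new GaussianContinuum();
  gc->setOption("n_dim", "1");
  gc->setOption("n_neighbors", "10");
  gc->setOption("sm_bigger_than_sn", "1");
  gc->setOption("min_sigma", "1e-5");
  gc->setOption("variances_transfer_function", "\"softplus\""); // quoting is an assumption
  gc->build(); // rebuild so the new option values take effect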


static const PPath& PLearn::GaussianContinuum::declaringFile ( ) [inline, static]

Reimplemented from PLearn::PLearner.

Definition at line 226 of file GaussianContinuum.h.

GaussianContinuum * PLearn::GaussianContinuum::deepCopy ( CopiesMap & copies ) const [virtual]

Reimplemented from PLearn::PLearner.

Definition at line 157 of file GaussianContinuum.cc.

void PLearn::GaussianContinuum::forget ( ) [virtual]

(Re-)initializes the PLearner in its fresh state (that state may depend on the 'seed' option) and sets 'stage' back to 0 (the stage of a fresh learner!).

Reimplemented from PLearn::PLearner.

Definition at line 1330 of file GaussianContinuum.cc.

References initializeParams(), PLearn::PLearner::stage, and PLearn::PLearner::train_set.

{
  if (train_set) initializeParams();
  stage = 0;
}


void PLearn::GaussianContinuum::get_image_matrix ( VMat  points,
VMat  image_points_vmat,
int  begin,
string  file_path,
int  n_near_neigh 
) [private]

Definition at line 1143 of file GaussianContinuum.cc.

References Bs, PLearn::TMat< T >::clear(), PLearn::computeNearestNeighbors(), PLearn::endl(), PLearn::exp(), Fs, image_nearest_neighbors, PLearn::VMat::length(), PLearn::log(), Log2Pi, mus, n, n_dim, neighbor_row, points_per_dim, PLearn::pownorm(), PLearn::product(), reference_set, PLearn::TMat< T >::resize(), PLearn::Object::save(), sms, sns, PLearn::substract(), t_row, PLearn::transposeProduct(), w, x_minus_neighbor, z, zm, and zn.

Referenced by train().

{
  VMat reference_set = new SubVMatrix(points,begin,0,points.length()-begin,n);
  cout << "Creating image matrix: " << file_path << endl;
  Mat image(points_per_dim,points_per_dim); image.clear();
  image_nearest_neighbors.resize(points_per_dim*points_per_dim,n_near_neigh);
  // Finding nearest neighbors

  for(int t=0; t<image_points_vmat.length(); t++)
  {
    image_points_vmat->getRow(t,t_row);
    TVec<int> nn = image_nearest_neighbors(t);
    computeNearestNeighbors(reference_set, t_row, nn);
  }

  for(int t=0; t<image_points_vmat.length(); t++)
  {
    
    image_points_vmat->getRow(t,t_row);
    real this_p_x = 0;
    // fetching nearest neighbors for density estimation
    for(int neighbor=0; neighbor<n_near_neigh; neighbor++)
    {
      points->getRow(begin+image_nearest_neighbors(t,neighbor), neighbor_row);
      substract(t_row,neighbor_row,x_minus_neighbor);
      substract(x_minus_neighbor,mus(begin+image_nearest_neighbors(t,neighbor)),z);
      product(w, Bs[begin+image_nearest_neighbors(t,neighbor)], z);
      transposeProduct(zm, Fs[begin+image_nearest_neighbors(t,neighbor)], w);
      substract(z,zm,zn);
      this_p_x += exp(-0.5*(pownorm(zm,2)/sms[begin+image_nearest_neighbors(t,neighbor)] + pownorm(zn,2)/sns[begin+image_nearest_neighbors(t,neighbor)] 
                            + n_dim*log(sms[begin+image_nearest_neighbors(t,neighbor)]) + (n-n_dim)*log(sns[begin+image_nearest_neighbors(t,neighbor)])) - n/2.0 * Log2Pi);
    }
    
    this_p_x /= reference_set.length();
    int y_coord = t/points_per_dim;
    int x_coord = t%points_per_dim;
    image(points_per_dim - y_coord - 1,x_coord) = this_p_x;
  }
  PLearn::save(file_path,image);
  
}


real PLearn::GaussianContinuum::get_nll ( VMat  points,
VMat  image_points_vmat,
int  begin,
int  n_near_neigh 
) [private]

Definition at line 1102 of file GaussianContinuum.cc.

References Bs, PLearn::computeNearestNeighbors(), PLearn::exp(), Fs, image_nearest_neighbors, PLearn::VMat::length(), PLearn::log(), Log2Pi, mus, n, n_dim, neighbor_row, PLearn::pownorm(), PLearn::product(), reference_set, PLearn::TMat< T >::resize(), sms, sns, PLearn::substract(), t_row, PLearn::transposeProduct(), w, x_minus_neighbor, z, zm, and zn.

Referenced by train().

{
  VMat reference_set = new SubVMatrix(points,begin,0,points.length()-begin,n);
  //Mat image(points_per_dim,points_per_dim); image.clear();
  image_nearest_neighbors.resize(image_points_vmat.length(),n_near_neigh);
  // Finding nearest neighbors

  for(int t=0; t<image_points_vmat.length(); t++)
  {
    image_points_vmat->getRow(t,t_row);
    TVec<int> nn = image_nearest_neighbors(t);
    computeNearestNeighbors(reference_set, t_row, nn);
  }

  real nll = 0;

  for(int t=0; t<image_points_vmat.length(); t++)
  {
    
    image_points_vmat->getRow(t,t_row);
    real this_p_x = 0;
    // fetching nearest neighbors for density estimation
    for(int neighbor=0; neighbor<n_near_neigh; neighbor++)
    {
      points->getRow(begin+image_nearest_neighbors(t,neighbor), neighbor_row);
      substract(t_row,neighbor_row,x_minus_neighbor);
      substract(x_minus_neighbor,mus(begin+image_nearest_neighbors(t,neighbor)),z);
      product(w, Bs[begin+image_nearest_neighbors(t,neighbor)], z);
      transposeProduct(zm, Fs[begin+image_nearest_neighbors(t,neighbor)], w);
      substract(z,zm,zn);
      this_p_x += exp(-0.5*(pownorm(zm,2)/sms[begin+image_nearest_neighbors(t,neighbor)] + pownorm(zn,2)/sns[begin+image_nearest_neighbors(t,neighbor)] 
                            + n_dim*log(sms[begin+image_nearest_neighbors(t,neighbor)]) + (n-n_dim)*log(sns[begin+image_nearest_neighbors(t,neighbor)])) - n/2.0 * Log2Pi);
    }
    
    this_p_x /= reference_set.length();
    nll -= log(this_p_x);
  }

  return nll/image_points_vmat.length();
}
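
The returned value is thus the average negative log-likelihood of the evaluated points under the estimator \hat p described in compute_train_and_validation_costs():

  \mathrm{NLL} = -\frac{1}{m} \sum_{t=1}^{m} \log \hat p(x_t), \qquad m = \texttt{image\_points\_vmat.length()}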


OptionList & PLearn::GaussianContinuum::getOptionList ( ) const [virtual]

Reimplemented from PLearn::Object.

Definition at line 157 of file GaussianContinuum.cc.

OptionMap & PLearn::GaussianContinuum::getOptionMap ( ) const [virtual]

Reimplemented from PLearn::Object.

Definition at line 157 of file GaussianContinuum.cc.

RemoteMethodMap & PLearn::GaussianContinuum::getRemoteMethodMap ( ) const [virtual]

Reimplemented from PLearn::Object.

Definition at line 157 of file GaussianContinuum.cc.

TVec< string > PLearn::GaussianContinuum::getTestCostNames ( ) const [virtual]

Returns the names of the costs computed by computeCostsFromOutputs (and thus the test method).

Implements PLearn::PLearner.

Definition at line 1643 of file GaussianContinuum.cc.

References getTrainCostNames().

{
  return getTrainCostNames();
}


TVec< string > PLearn::GaussianContinuum::getTrainCostNames ( ) const [virtual]

Returns the names of the objective costs that the train method computes and for which it updates the VecStatsCollector train_stats.

Implements PLearn::PLearner.

Definition at line 1648 of file GaussianContinuum.cc.

Referenced by getTestCostNames().

{
  TVec<string> cost(1); cost[0] = "NLL";
  return cost;
}


void PLearn::GaussianContinuum::initializeParams ( ) [virtual]

Definition at line 1544 of file GaussianContinuum.cc.

References architecture_type, b, c, PLearn::fill_random_uniform(), i, PLearn::PLearner::inputsize(), PLearn::Var::length(), PLearn::manual_seed(), muV, n_hidden_units, optimizer, p_x, PLERROR, PLearn::seed(), PLearn::PLearner::seed_, smb, smV, snb, snV, PLearn::sqrt(), V, and W.

Referenced by forget().

{
  if (seed_>=0)
    manual_seed(seed_);
  else
    PLearn::seed();

  if (architecture_type=="embedding_neural_network")
  {
    real delta = 1.0 / sqrt(real(inputsize()));
    fill_random_uniform(V->value, -delta, delta);
    delta = 1.0 / real(n_hidden_units);
    fill_random_uniform(W->matValue, -delta, delta);
    c->value.clear();
    fill_random_uniform(smV->matValue, -delta, delta);
    smb->value.clear();
    fill_random_uniform(smV->matValue, -delta, delta);
    snb->value.clear();
    fill_random_uniform(snV->matValue, -delta, delta);
    fill_random_uniform(muV->matValue, -delta, delta);
  }
  else if (architecture_type=="single_neural_network")
  {
    real delta = 1.0 / sqrt(real(inputsize()));
    fill_random_uniform(V->value, -delta, delta);
    delta = 1.0 / real(n_hidden_units);
    fill_random_uniform(W->matValue, -delta, delta);
    c->value.clear();
    fill_random_uniform(smV->matValue, -delta, delta);
    smb->value.clear();
    fill_random_uniform(smV->matValue, -delta, delta);
    snb->value.clear();
    fill_random_uniform(snV->matValue, -delta, delta);
    fill_random_uniform(muV->matValue, -delta, delta);
    b->value.clear();
  }
  else PLERROR("other types not handled yet!");
  
  for(int i=0; i<p_x.length(); i++)
    p_x->value[i] = 1.0/p_x.length();

  if(optimizer)
    optimizer->reset();
}
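
The ranges above implement a simple fan-in scaling; in LaTeX:

  \theta_{ij} \sim U(-\delta, \delta), \qquad \delta = \frac{1}{\sqrt{\mathrm{inputsize}}} \text{ for } V, \qquad \delta = \frac{1}{n_{\mathrm{hidden\_units}}} \text{ for } W, \mathrm{muV}, \mathrm{smV}, \mathrm{snV}

while the biases (c, smb, snb, and b in the single_neural_network case) are cleared to zero and p_x starts uniform at 1/p_x.length().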


void PLearn::GaussianContinuum::knn ( const VMat & vm,
const Vec & x,
const int & k,
TVec< int > & neighbors,
bool sortk
) const [private]

Definition at line 900 of file GaussianContinuum.cc.

References PLearn::TMat< T >::column(), distances, dk, PLearn::Kernel::evaluate_all_i_x(), i, PLearn::VMat::length(), n, PLearn::partialSortRows(), PLearn::TMat< T >::resize(), PLearn::TVec< T >::resize(), PLearn::DistanceKernel::setDataForKernelMatrix(), and t_dist.

Referenced by build_(), and computeOutput().

{
  int n = vm->length();
  distances.resize(n,2);
  distances.column(1) << Vec(0, n-1, 1); 
  dk.setDataForKernelMatrix(vm);
  t_dist.resize(n);
  dk.evaluate_all_i_x(x, t_dist);
  distances.column(0) << t_dist;
  partialSortRows(distances, k, sortk);
  neighbors.resize(k);
  for (int i = 0; i < k; i++)
    neighbors[i] = int(distances(i,1));
}


void PLearn::GaussianContinuum::make_random_walk ( ) [private]

Definition at line 915 of file GaussianContinuum.cc.

References i, ith_step_generated_set, j, PLearn::lapackSVD(), PLearn::TVec< T >::length(), PLearn::VMat::length(), mu, n, n_dim, n_random_walk_per_point, n_random_walk_step, PLearn::normal_sample(), output_f_all, PLERROR, random_walk_step_prop, reference_set, PLearn::TVec< T >::resize(), S_svd, sm, sn, PLearn::sqrt(), svd_threshold, t_row, tangent_plane, PLearn::PLearner::train_set, Ut_svd, V_svd, PLearn::vconcat(), walk_on_noise, and z.

Referenced by train().

{
  if(n_random_walk_step < 1) PLERROR("Number of steps in random walk should be at least one");
  if(n_random_walk_per_point < 1) PLERROR("Number of random walk per training set point should be at least one");
  ith_step_generated_set.resize(n_random_walk_step);

  Mat generated_set(train_set.length()*n_random_walk_per_point,n);
  for(int t=0; t<train_set.length(); t++)
  {
    train_set->getRow(t,t_row);
    output_f_all(t_row);
      
    real this_sm = sm->value[0];
    real this_sn = sn->value[0];
    Vec this_mu(n); this_mu << mu->value;
    static Mat this_F(n_dim,n); this_F << tangent_plane->matValue;
      
    // N.B. this is the SVD of F'
    lapackSVD(this_F, Ut_svd, S_svd, V_svd);
      

    for(int rwp=0; rwp<n_random_walk_per_point; rwp++)
    {
      TVec<real> z_m(n_dim);
      TVec<real> z(n);
      for(int i=0; i<n_dim; i++)
        z_m[i] = normal_sample();
      for(int i=0; i<n; i++)
        z[i] = normal_sample();

      Vec new_point = generated_set(t*n_random_walk_per_point+rwp);
      for(int j=0; j<n; j++)
      {
        new_point[j] = 0;         
        for(int k=0; k<n_dim; k++)
          new_point[j] += Ut_svd(k,j)*z_m[k];
        new_point[j] *= sqrt(this_sm-this_sn);
        if(walk_on_noise)
          new_point[j] += z[j]*sqrt(this_sn);
      }
      new_point *= random_walk_step_prop;
      new_point += this_mu + t_row;
    }
  }

  // Test of generation of random points
  /*
  int n_test_gen_points = 3;
  int n_test_gen_generated = 30;

  Mat test_gen(n_test_gen_points*n_test_gen_generated,n);
  for(int p=0; p<n_test_gen_points; p++)
  {
    for(int t=0; t<n_test_gen_generated; t++)             
    {
      valid_set->getRow(p,t_row);
      output_f_all(t_row);
      
      real this_sm = sm->value[0];
      real this_sn = sn->value[0];
      Vec this_mu(n); this_mu << mu->value;
      static Mat this_F(n_dim,n); this_F << tangent_plane->matValue;
      
      // N.B. this is the SVD of F'
      lapackSVD(this_F, Ut_svd, S_svd, V_svd);      

      TVec<real> z_m(n_dim);
      TVec<real> z(n);
      for(int i=0; i<n_dim; i++)
        z_m[i] = normal_sample();
      for(int i=0; i<n; i++)
        z[i] = normal_sample();

      Vec new_point = test_gen(p*n_test_gen_generated+t);
      for(int j=0; j<n; j++)
      {
        new_point[j] = 0;         
        for(int k=0; k<n_dim; k++)
          new_point[j] += Ut_svd(k,j)*z_m[k];
        new_point[j] *= sqrt(this_sm-this_sn);
        if(walk_on_noise)
          new_point[j] += z[j]*sqrt(this_sn);
      }
      new_point += this_mu + t_row;
    }
  }
  
  PLearn::save("test_gen.psave",test_gen);
  */
  //PLearn::save("gen_points_0.psave",generated_set);
  ith_step_generated_set[0] = VMat(generated_set);
  
  for(int step=1; step<n_random_walk_step; step++)
  {
    Mat generated_set(ith_step_generated_set[step-1].length(),n);
    for(int t=0; t<ith_step_generated_set[step-1].length(); t++)
    {
      ith_step_generated_set[step-1]->getRow(t,t_row);
      output_f_all(t_row);
      
      real this_sm = sm->value[0];
      real this_sn = sn->value[0];
      Vec this_mu(n); this_mu << mu->value;
      static Mat this_F(n_dim,n); this_F << tangent_plane->matValue;
      
      // N.B. this is the SVD of F'
      lapackSVD(this_F, Ut_svd, S_svd, V_svd);
      
      TVec<real> z_m(n_dim);
      TVec<real> z(n);
      for(int i=0; i<n_dim; i++)
        z_m[i] = normal_sample();
      for(int i=0; i<n; i++)
        z[i] = normal_sample();
      
      Vec new_point = generated_set(t);
      for(int j=0; j<n; j++)
      {
        new_point[j] = 0;
        for(int k=0; k<n_dim; k++)
          if(S_svd[k] > svd_threshold)
            new_point[j] += Ut_svd(k,j)*z_m[k];
        new_point[j] *= sqrt(this_sm-this_sn);
        if(walk_on_noise)
          new_point[j] += z[j]*sqrt(this_sn);
      }
      new_point *= random_walk_step_prop;
      new_point += this_mu + t_row;
    
    }
    /*
    string path = " ";
    if(step == n_random_walk_step-1)
      path = "gen_points_last.psave";
    else
      path = "gen_points_" + tostring(step) + ".psave";
    
    PLearn::save(path,generated_set);
    */
    ith_step_generated_set[step] = VMat(generated_set);
  }

  reference_set = vconcat(train_set & ith_step_generated_set);

  // Single random walk
  /*
  Mat single_walk_set(100,n);
  train_set->getRow(train_set.length()-1,single_walk_set(0));
  for(int step=1; step<100; step++)
  {
    t_row << single_walk_set(step-1);
    output_f_all(t_row);
      
    real this_sm = sm->value[0];
    real this_sn = sn->value[0];
    Vec this_mu(n); this_mu << mu->value;
    static Mat this_F(n_dim,n); this_F << tangent_plane->matValue;
    
    // N.B. this is the SVD of F'
    lapackSVD(this_F, Ut_svd, S_svd, V_svd);
    
    TVec<real> z_m(n_dim);
    TVec<real> z(n);
    for(int i=0; i<n_dim; i++)
      z_m[i] = normal_sample();
    for(int i=0; i<n; i++)
      z[i] = normal_sample();
    
    Vec new_point = single_walk_set(step);
    for(int j=0; j<n; j++)
    {
      new_point[j] = 0;
      for(int k=0; k<n_dim; k++)
        if(S_svd[k] > svd_threshold)
          new_point[j] += Ut_svd(k,j)*z_m[k];
      new_point[j] *= sqrt(this_sm-this_sn);
      if(walk_on_noise)
        new_point[j] += z[j]*sqrt(this_sn);
    }
    new_point *= random_walk_step_prop;
    new_point += this_mu + t_row;
  }
  PLearn::save("image_single_rw.psave",single_walk_set);
  */
}
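
Each generated point in the walk above is drawn as follows, in LaTeX (u_k is the k-th row of Ut_svd, i.e. a leading singular direction of F^\top; \eta is random_walk_step_prop; after the first step, directions with singular value below svd_threshold are skipped):

  x' = x + \mu(x) + \eta \Big( \sqrt{s_m - s_n}\, \sum_{k=1}^{d} z_k\, u_k \;+\; \mathbb{1}[\mathrm{walk\_on\_noise}]\, \sqrt{s_n}\, \varepsilon \Big), \qquad z_k,\ \varepsilon_i \sim \mathcal{N}(0,1)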


void PLearn::GaussianContinuum::makeDeepCopyFromShallowCopy ( CopiesMap & copies ) [virtual]

Transforms a shallow copy into a deep copy.

Reimplemented from PLearn::PLearner.

Definition at line 1267 of file GaussianContinuum.cc.

References b, Bs, c, cost_of_one_example, PLearn::deepCopyField(), dist, dk, embedding, Fs, ith_step_generated_set, PLearn::PLearner::makeDeepCopyFromShallowCopy(), min_d, min_sig, mu, mu_noisy, mus, muV, noise_var, noisy_data, optimizer, output_f, output_f_all, parameters, predictor, projection_error_f, reference_set, S_svd, sm, smb, sms, smV, sn, snb, sns, snV, sum_nll, tangent_plane, tangent_targets, tangent_targets_and_point, train_nearest_neighbors, Ut_svd, V, V_svd, validation_nearest_neighbors, PLearn::varDeepCopyField(), W, and x.

{  inherited::makeDeepCopyFromShallowCopy(copies);

  deepCopyField(cost_of_one_example, copies);
  deepCopyField(reference_set,copies);
  varDeepCopyField(x, copies);
  varDeepCopyField(noise_var, copies);  
  varDeepCopyField(b, copies);
  varDeepCopyField(W, copies);
  varDeepCopyField(c, copies);
  varDeepCopyField(V, copies);
  varDeepCopyField(tangent_targets, copies);
  varDeepCopyField(muV, copies);
  varDeepCopyField(smV, copies);
  varDeepCopyField(smb, copies);
  varDeepCopyField(snV, copies);
  varDeepCopyField(snb, copies);
  varDeepCopyField(mu, copies);
  varDeepCopyField(sm, copies);
  varDeepCopyField(sn, copies);
  varDeepCopyField(mu_noisy, copies);
  varDeepCopyField(tangent_plane, copies);
  varDeepCopyField(tangent_targets_and_point, copies);
  varDeepCopyField(sum_nll, copies);
  varDeepCopyField(min_sig, copies);
  varDeepCopyField(min_d, copies);
  varDeepCopyField(embedding, copies);

  deepCopyField(dist, copies);
  deepCopyField(ith_step_generated_set, copies);
  deepCopyField(train_nearest_neighbors, copies);
  deepCopyField(validation_nearest_neighbors, copies);
  deepCopyField(Bs, copies);
  deepCopyField(Fs, copies);
  deepCopyField(mus, copies);
  deepCopyField(sms, copies);
  deepCopyField(sns, copies);
  deepCopyField(Ut_svd, copies);
  deepCopyField(V_svd, copies);
  deepCopyField(S_svd, copies);
  deepCopyField(dk, copies);

  deepCopyField(parameters, copies);
  deepCopyField(optimizer, copies);
  deepCopyField(predictor, copies);
  deepCopyField(output_f, copies);
  deepCopyField(output_f_all, copies);
  deepCopyField(projection_error_f, copies);
  deepCopyField(noisy_data, copies);
}


int PLearn::GaussianContinuum::outputsize ( ) const [virtual]

Returns the size of this learner's output (which typically depends on its inputsize(), targetsize() and set options).

Implements PLearn::PLearner.

Definition at line 1319 of file GaussianContinuum.cc.

{
  return 1;
  /*
  if(output_type == "tangent_plane_variance_normalized")
    return output_f->outputsize-1;
  else
    return output_f->outputsize;
  */
}
void PLearn::GaussianContinuum::train ( ) [virtual]

The role of the train method is to bring the learner up to stage==nstages, updating the train_stats collector with training costs measured on-line in the process.

Implements PLearn::PLearner.

Definition at line 1336 of file GaussianContinuum.cc.

References batch_size, compute_cost_every_n_epochs, compute_train_and_validation_costs(), PLearn::PLearner::computeCostsOnly(), PLearn::computeNearestNeighbors(), cost_of_one_example, PLearn::endl(), get_image_matrix(), get_nll(), PLearn::hconcat(), i, image_points_mat, image_points_vmat, image_prob_mat, PLearn::PLearner::inputsize(), j, PLearn::VMat::length(), PLearn::local_neighbors_differences(), lower_x, lower_y, make_random_walk(), PLearn::meanOf(), mu, n, n_neighbors, n_neighbors_density, n_random_walk_per_point, n_random_walk_step, PLearn::norm(), PLearn::PLearner::nstages, optimizer, output_f_all, parameters, PLERROR, points_per_dim, reference_set, PLearn::PLearner::report_progress, PLearn::TVec< T >::resize(), PLearn::TMat< T >::resize(), PLearn::Object::save(), save_image_mat, sm, sn, PLearn::sqrt(), PLearn::PLearner::stage, t_row, tangent_plane, PLearn::tostring(), train_nearest_neighbors, PLearn::PLearner::train_set, PLearn::PLearner::train_stats, update_reference_set_parameters(), upper_x, upper_y, valid_set, validation_nearest_neighbors, PLearn::PLearner::verbosity, and PLearn::VMat::width().

{

  // Creation of points for matlab image matrices

  if(save_image_mat)
  {
    if(n != 2) PLERROR("In GaussianContinuum::train(): Image matrix creation is only implemented for 2d problems");
    
    real step_x = (upper_x-lower_x)/(points_per_dim-1);
    real step_y = (upper_y-lower_y)/(points_per_dim-1);
    image_points_mat.resize(points_per_dim*points_per_dim,n);
    for(int i=0; i<points_per_dim; i++)
      for(int j=0; j<points_per_dim; j++)
      {
        image_points_mat(i*points_per_dim + j,0) = lower_x + j*step_x;
        image_points_mat(i*points_per_dim + j,1) = lower_y + i*step_y;
      }

    image_points_vmat = VMat(image_points_mat);
  }

  // find nearest neighbors...

  // ... on the training set
  
  for(int t=0; t<train_set.length(); t++)
  {
    train_set->getRow(t,t_row);
    TVec<int> nn = train_nearest_neighbors(t);
    computeNearestNeighbors(train_set, t_row, nn, t);
  }
  
  // ... on the validation set
  
  for(int t=0; t<valid_set.length(); t++)
  {
    valid_set->getRow(t,t_row);
    TVec<int> nn = validation_nearest_neighbors(t);
    computeNearestNeighbors(train_set, t_row, nn);
  }

  VMat train_set_with_targets;
  VMat targets_vmat;
  if (!cost_of_one_example)
    PLERROR("GaussianContinuum::train: build has not been run after setTrainingSet!");

  targets_vmat = local_neighbors_differences(train_set, n_neighbors, false, true);

  train_set_with_targets = hconcat(train_set, targets_vmat);
  train_set_with_targets->defineSizes(inputsize()+inputsize()*n_neighbors+1+n_neighbors,0);
  int l = train_set->length();  
  //log_n_examples->value[0] = log(real(l));
  int nsamples = batch_size>0 ? batch_size : l;

  Var totalcost = meanOf(train_set_with_targets, cost_of_one_example, nsamples);

  if(optimizer)
    {
      optimizer->setToOptimize(parameters, totalcost);  
      optimizer->build();
    }
  else PLERROR("GaussianContinuum::train can't train without setting an optimizer first!");
  
  // number of optimizer stages corresponding to one learner stage (one epoch)
  int optstage_per_lstage = l/nsamples;

  PP<ProgressBar> pb;
  if(report_progress>0)
    pb = new ProgressBar("Training GaussianContinuum from stage " + tostring(stage) + " to " + tostring(nstages), nstages-stage);

  t_row.resize(train_set.width());

  int initial_stage = stage;
  bool early_stop=false;
  while(stage<nstages && !early_stop)
    {
      optimizer->nstages = optstage_per_lstage;
      train_stats->forget();
      optimizer->early_stop = false;
      optimizer->optimizeN(*train_stats);
      train_stats->finalize();
      if(verbosity>2)
        cout << "Epoch " << stage << " train objective: " << train_stats->getMean() << endl;
      ++stage;
      if(pb)
        pb->update(stage-initial_stage);
      
      if(stage != 0 && stage%compute_cost_every_n_epochs == 0)
      {
        compute_train_and_validation_costs();
      }
    }
  if(verbosity>1)
    cout << "EPOCH " << stage << " train objective: " << train_stats->getMean() << endl;

  update_reference_set_parameters();

  cout << "best train: " << get_nll(train_set,train_set,0,n_neighbors_density) << endl;
  cout << "best validation: " << get_nll(train_set,valid_set,0,n_neighbors_density) << endl;

  // test computeOutput and Costs

  real nll_train = 0;
  Vec costs(1);
  Vec target;
  for(int i=0; i<train_set.length(); i++)
  {
    train_set->getRow(i,t_row);
    computeCostsOnly(t_row,target,costs);
    nll_train += costs[0];
  }
  nll_train /= train_set.length();
  cout << "nll_train: " << nll_train << endl;
  
  /*
  int n_test_gen_points = 3;
  int n_test_gen_generated = 30;
  Mat noisy_data_set(n_test_gen_points*n_test_gen_generated,n);
  
  for(int k=0; k<n_test_gen_points; k++)
  {
    for(int t=0; t<n_test_gen_generated; t++)
    {
      valid_set->getRow(k,t_row);
      Vec noisy_point = noisy_data_set(k*n_test_gen_generated+t);
      noisy_point << noisy_data(t_row);
    }
    PLearn::save("noisy_data.psave",noisy_data_set);
  }
  */
  
  if(n==2 && save_image_mat)
  {
    Mat test_set(valid_set.length(),valid_set.width());
    Mat m_dir(valid_set.length(),n);
    Mat n_dir(valid_set.length(),n);
    for(int t=0; t<valid_set.length(); t++)
    {
      valid_set->getRow(t,t_row);
      test_set(t) << t_row;
      output_f_all(t_row);
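      // in 2D, (v[1], -v[0]) is perpendicular to the tangent vector (v[0], v[1]):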
      Vec noise_direction = n_dir(t);
      noise_direction[0] = tangent_plane->value[1];
      noise_direction[1] = -1*tangent_plane->value[0];
      Vec manifold_direction = m_dir(t);
      manifold_direction << tangent_plane->value;
      noise_direction *= sqrt(sn->value[0])/norm(noise_direction,2);
      manifold_direction *= sqrt(sm->value[0])/norm(manifold_direction,2);
    }
    PLearn::save("test_set.psave",test_set);
    PLearn::save("m_dir.psave",m_dir);
    PLearn::save("n_dir.psave",n_dir);
  }
  

  if(n_random_walk_step > 0)
  {
    make_random_walk();
    update_reference_set_parameters();
  }
  
  if(save_image_mat)
  {
    cout << "Creating image matrix" << endl;
    get_image_matrix(train_set, image_points_vmat, 0,"image.psave", n_neighbors_density);

    image_prob_mat.resize(points_per_dim,points_per_dim);
    Mat image_points(points_per_dim*points_per_dim,2);
    Mat image_mu_vectors(points_per_dim*points_per_dim,2);
    //Mat image_sigma_vectors(points_per_dim*points_per_dim,2);
    for(int t=0; t<image_points_vmat.length(); t++)
    {
      image_points_vmat->getRow(t,t_row);
     
      output_f_all(t_row);

      image_points(t,0) = t_row[0];
      image_points(t,1) = t_row[1];
      
      image_mu_vectors(t) << mu->value;
    }
    PLearn::save("image_points.psave",image_points);
    PLearn::save("image_mu_vectors.psave",image_mu_vectors);

    if(n_random_walk_step > 0)
    {
      string path = "image_rw_" + tostring(0) + ".psave";

      get_image_matrix(reference_set, image_points_vmat, 0, path, n_neighbors_density*n_random_walk_per_point);
      
      for(int i=0; i<n_random_walk_step; i++)
      {
        if(i == n_random_walk_step - 1)
          path = "image_rw_last.psave";
        else
          path = "image_rw_" + tostring(i+1) + ".psave";

        get_image_matrix(reference_set, image_points_vmat, i*train_set.length()*n_random_walk_per_point+train_set.length(),path,n_neighbors_density*n_random_walk_per_point);
      }

      cout << "NLL random walk on train: " << get_nll(reference_set,train_set,(n_random_walk_step-1)*train_set.length()*n_random_walk_per_point+train_set.length(),n_neighbors_density*n_random_walk_per_point) << endl;
      cout << "NLL random walk on validation: " << get_nll(reference_set,valid_set,(n_random_walk_step-1)*train_set.length()*n_random_walk_per_point+train_set.length(),n_neighbors_density*n_random_walk_per_point) << endl;
    }
  }

}
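
A minimal driver sketch (hypothetical values; the option and method names are those documented on this page). train() PLERRORs unless an optimizer has been set and build() has been run after setTrainingSet():

  PP<GaussianContinuum> learner = new GaussianContinuum();
  learner->optimizer = new GradientOptimizer(); // any PP<Optimizer> will do
  learner->nstages = 10;                        // train for 10 epochs
  learner->setTrainingSet(train_vmat);          // train_vmat: some VMat of examples
  learner->build();                             // must follow setTrainingSet()
  learner->train();                             // runs until stage == nstages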

void PLearn::GaussianContinuum::update_reference_set_parameters ( ) [private]

Definition at line 857 of file GaussianContinuum.cc.

References Bs, PLearn::TVec< T >::clear(), Fs, i, j, PLearn::lapackSVD(), PLearn::Var::length(), PLearn::TVec< T >::length(), PLearn::VMat::length(), mus, n, n_dim, predictor, reference_set, PLearn::TMat< T >::resize(), PLearn::TVec< T >::resize(), S_svd, sms, sns, PLearn::TVec< T >::subVec(), svd_threshold, t_row, tangent_plane, Ut_svd, V_svd, PLearn::VMat::width(), and PLearn::Var::width().

Referenced by compute_train_and_validation_costs(), and train().

{
    // Compute Fs, Bs, mus, sms, sns
  Bs.resize(reference_set.length());
  Fs.resize(reference_set.length());
  mus.resize(reference_set.length(), n);
  sms.resize(reference_set.length());
  sns.resize(reference_set.length());
  
  for(int t=0; t<reference_set.length(); t++)
  {
    Fs[t].resize(tangent_plane.length(), tangent_plane.width());
    reference_set->getRow(t,t_row);
    predictor->fprop(t_row, Fs[t].toVec() & mus(t) & sms.subVec(t,1) & sns.subVec(t,1));
    
    // computing B

    static Mat F_copy;
    F_copy.resize(Fs[t].length(),Fs[t].width());
    F_copy << Fs[t];
    // N.B. this is the SVD of F'
    lapackSVD(F_copy, Ut_svd, S_svd, V_svd);
    Bs[t].resize(n_dim,reference_set.width());
    Bs[t].clear();
    for (int k=0;k<S_svd.length();k++)
    {
      real s_k = S_svd[k];
      if (s_k>svd_threshold) // ignore the components that have too small singular value (more robust solution)
      { 
        real coef = 1/s_k;
        for (int i=0;i<n_dim;i++)
        {
          real* Bi = Bs[t][i];
          for (int j=0;j<n;j++)
            Bi[j] += V_svd(i,k)*Ut_svd(k,j)*coef;
        }
      }
    }
    
  }

}
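
Reading the inner loop: assuming lapackSVD factors its input as U·diag(s)·V^T (returning U^T, the singular values s, and V), B is the Moore-Penrose pseudo-inverse of F', with singular values s_k <= svd_threshold discarded for robustness. In LaTeX notation, with \tau standing for svd_threshold:

  B = (F^\top)^{+} = \sum_{k:\, s_k > \tau} \frac{1}{s_k}\, v_k\, u_k^\top

where u_k and v_k are the k-th columns of U and V; the accumulation Bi[j] += V_svd(i,k)*Ut_svd(k,j)*coef realizes exactly this sum.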


Member Data Documentation

Reimplemented from PLearn::PLearner.

Definition at line 226 of file GaussianContinuum.h.

Definition at line 164 of file GaussianContinuum.h.

Referenced by build_(), declareOptions(), and initializeParams().

Definition at line 69 of file GaussianContinuum.h.

Referenced by build_(), initializeParams(), and makeDeepCopyFromShallowCopy().

Definition at line 168 of file GaussianContinuum.h.

Referenced by declareOptions(), and train().

Definition at line 106 of file GaussianContinuum.h.

Referenced by build_().

Definition at line 69 of file GaussianContinuum.h.

Referenced by build_(), initializeParams(), and makeDeepCopyFromShallowCopy().

Definition at line 152 of file GaussianContinuum.h.

Referenced by declareOptions(), and train().

Definition at line 66 of file GaussianContinuum.h.

Referenced by build_(), makeDeepCopyFromShallowCopy(), and train().

Definition at line 79 of file GaussianContinuum.h.

Referenced by build_(), and makeDeepCopyFromShallowCopy().

Definition at line 102 of file GaussianContinuum.h.

Referenced by knn().

Definition at line 104 of file GaussianContinuum.h.

Referenced by knn(), and makeDeepCopyFromShallowCopy().

Definition at line 156 of file GaussianContinuum.h.

Referenced by build_(), and makeDeepCopyFromShallowCopy().

Definition at line 137 of file GaussianContinuum.h.

Referenced by get_image_matrix(), and get_nll().

Definition at line 135 of file GaussianContinuum.h.

Referenced by train().

Definition at line 134 of file GaussianContinuum.h.

Referenced by train().

Definition at line 136 of file GaussianContinuum.h.

Referenced by train().

Definition at line 124 of file GaussianContinuum.h.

Referenced by build_(), and declareOptions().

Definition at line 84 of file GaussianContinuum.h.

Referenced by make_random_walk(), and makeDeepCopyFromShallowCopy().

Definition at line 141 of file GaussianContinuum.h.

Referenced by declareOptions(), and train().

Definition at line 139 of file GaussianContinuum.h.

Referenced by declareOptions(), and train().

Definition at line 77 of file GaussianContinuum.h.

Referenced by build_(), and makeDeepCopyFromShallowCopy().

Definition at line 144 of file GaussianContinuum.h.

Referenced by build_(), and declareOptions().

Definition at line 145 of file GaussianContinuum.h.

Referenced by build_(), and declareOptions().

Definition at line 77 of file GaussianContinuum.h.

Referenced by build_(), and makeDeepCopyFromShallowCopy().

Definition at line 143 of file GaussianContinuum.h.

Referenced by build_(), and declareOptions().

Definition at line 150 of file GaussianContinuum.h.

Referenced by build_(), and declareOptions().

Definition at line 74 of file GaussianContinuum.h.

Referenced by build_(), and makeDeepCopyFromShallowCopy().

Definition at line 69 of file GaussianContinuum.h.

Referenced by build_(), initializeParams(), and makeDeepCopyFromShallowCopy().

Definition at line 166 of file GaussianContinuum.h.

Referenced by build_(), declareOptions(), and initializeParams().

Definition at line 148 of file GaussianContinuum.h.

Referenced by build_(), declareOptions(), and train().

Definition at line 131 of file GaussianContinuum.h.

Referenced by declareOptions(), make_random_walk(), and train().

Definition at line 130 of file GaussianContinuum.h.

Referenced by declareOptions(), make_random_walk(), and train().

Definition at line 75 of file GaussianContinuum.h.

Definition at line 128 of file GaussianContinuum.h.

Referenced by build_(), and declareOptions().

Definition at line 129 of file GaussianContinuum.h.

Referenced by build_(), and declareOptions().

Definition at line 68 of file GaussianContinuum.h.

Referenced by build_(), and makeDeepCopyFromShallowCopy().

Definition at line 161 of file GaussianContinuum.h.

Referenced by build_(), and makeDeepCopyFromShallowCopy().

Definition at line 170 of file GaussianContinuum.h.

Definition at line 157 of file GaussianContinuum.h.

Referenced by makeDeepCopyFromShallowCopy().

Definition at line 165 of file GaussianContinuum.h.

Definition at line 75 of file GaussianContinuum.h.

Referenced by build_().

Definition at line 75 of file GaussianContinuum.h.

Referenced by build_().

Definition at line 75 of file GaussianContinuum.h.

Referenced by build_().

Definition at line 113 of file GaussianContinuum.h.

Referenced by build_(), declareOptions(), makeDeepCopyFromShallowCopy(), and train().

Definition at line 142 of file GaussianContinuum.h.

Referenced by declareOptions(), get_image_matrix(), and train().

Definition at line 146 of file GaussianContinuum.h.

Referenced by compute_train_and_validation_costs(), and declareOptions().

Definition at line 160 of file GaussianContinuum.h.

Referenced by makeDeepCopyFromShallowCopy().

Definition at line 125 of file GaussianContinuum.h.

Referenced by declareOptions(), and make_random_walk().

Definition at line 132 of file GaussianContinuum.h.

Referenced by declareOptions(), and train().

Definition at line 147 of file GaussianContinuum.h.

Referenced by build_(), and declareOptions().

Definition at line 69 of file GaussianContinuum.h.

Referenced by build_(), initializeParams(), and makeDeepCopyFromShallowCopy().

Definition at line 69 of file GaussianContinuum.h.

Referenced by build_(), initializeParams(), and makeDeepCopyFromShallowCopy().

Definition at line 69 of file GaussianContinuum.h.

Referenced by build_(), initializeParams(), and makeDeepCopyFromShallowCopy().

Definition at line 69 of file GaussianContinuum.h.

Referenced by build_(), initializeParams(), and makeDeepCopyFromShallowCopy().

Definition at line 76 of file GaussianContinuum.h.

Referenced by build_(), and makeDeepCopyFromShallowCopy().

Vec PLearn::GaussianContinuum::t_dist [mutable, protected]

Definition at line 101 of file GaussianContinuum.h.

Referenced by knn().

TVec<int> PLearn::GaussianContinuum::t_nn [mutable, protected]

Definition at line 100 of file GaussianContinuum.h.

Referenced by computeOutput().

Vec PLearn::GaussianContinuum::t_row [mutable, protected]

Definition at line 72 of file GaussianContinuum.h.

Referenced by build_(), and makeDeepCopyFromShallowCopy().

Definition at line 72 of file GaussianContinuum.h.

Referenced by build_(), and makeDeepCopyFromShallowCopy().

Definition at line 75 of file GaussianContinuum.h.

Referenced by build_().

Definition at line 87 of file GaussianContinuum.h.

Definition at line 140 of file GaussianContinuum.h.

Referenced by declareOptions(), and train().

Definition at line 138 of file GaussianContinuum.h.

Referenced by declareOptions(), and train().

Definition at line 126 of file GaussianContinuum.h.

Referenced by build_(), and declareOptions().

Definition at line 127 of file GaussianContinuum.h.

Referenced by build_(), and declareOptions().

Definition at line 69 of file GaussianContinuum.h.

Referenced by build_(), initializeParams(), and makeDeepCopyFromShallowCopy().

Definition at line 81 of file GaussianContinuum.h.

Referenced by build_(), compute_train_and_validation_costs(), and train().

Definition at line 154 of file GaussianContinuum.h.

Referenced by build_(), and declareOptions().

Definition at line 153 of file GaussianContinuum.h.

Referenced by build_(), and declareOptions().

Vec PLearn::GaussianContinuum::w [mutable, protected]

Definition at line 69 of file GaussianContinuum.h.

Referenced by build_(), initializeParams(), and makeDeepCopyFromShallowCopy().

Definition at line 133 of file GaussianContinuum.h.

Referenced by declareOptions(), and make_random_walk().

Definition at line 123 of file GaussianContinuum.h.

Referenced by build_(), and declareOptions().

Vec PLearn::GaussianContinuum::z [mutable, protected]

Vec PLearn::GaussianContinuum::zm [mutable, protected]

Vec PLearn::GaussianContinuum::zn [mutable, protected]

The documentation for this class was generated from the following files:
GaussianContinuum.h
GaussianContinuum.cc