PLearn 0.1
PLearn::TransformationLearner Class Reference

#include <TransformationLearner.h>

Inheritance diagram for PLearn::TransformationLearner:
Collaboration diagram for PLearn::TransformationLearner:


Public Member Functions

 TransformationLearner ()
 Default constructor.
virtual real log_density (const Vec &y) const
 Return log of probability density log(p(y | x)).
virtual void generate (Vec &y) const
 Return a pseudo-random sample generated from the conditional distribution, of density p(y | x).
virtual int inputsize () const
 Returns the dimensionality of the input space.
virtual void forget ()
 (Re-)initializes the PDistribution in its fresh state (that state may depend on the 'seed' option).
virtual void train ()
 The role of the train method is to bring the learner up to stage == nstages, updating the train_stats collector with training costs measured on-line in the process.
virtual string classname () const
virtual OptionList & getOptionList () const
virtual OptionMap & getOptionMap () const
virtual RemoteMethodMap & getRemoteMethodMap () const
virtual TransformationLearner * deepCopy (CopiesMap &copies) const
void initTransformsParameters ()
 initializes the transformation parameters randomly (all parameters a priori independent and normally distributed)
void setTransformsParameters (TVec< Mat > transforms, Mat bias=Mat())
 initializes the transformation parameters to the given values (biases are set to 0 if not provided)
void initNoiseVariance ()
 initializes the noise variance randomly (gamma distribution)
void setNoiseVariance (real nv)
 initializes the noise variance with the given value
void initTransformDistribution ()
 initializes the transformation distribution randomly (dirichlet distribution)
void setTransformDistribution (Vec td)
 initializes the transformation distribution with the given values
void generatePredictedFrom (const Vec &source, Vec &sample) const
 generates a sample data point from a source data point (a transformation is drawn at random)
void generatePredictedFrom (const Vec &source, Vec &sample, int transformIdx) const
 generates a sample data point from a source data point with a specific transformation
Vec returnPredictedFrom (Vec source, int transformIdx=-1) const
 generates a sample data point from a source data point and returns it (if transformIdx >= 0, the corresponding transformation is used)
void batchGeneratePredictedFrom (const Vec &center, Mat &samples) const
 fill the matrix "samples" with data points obtained from a given center data point
void batchGeneratePredictedFrom (const Vec &center, Mat &samples, int transformIdx) const
 fill the matrix "samples" with data points obtained form a given center data point
Mat returnGeneratedSamplesFrom (Vec center, int n, int transformIdx=-1) const
int pickTransformIdx () const
 select a transformation randomly (with respect to our multinomial distribution)
int pickNeighborIdx () const
 Selects a neighbor in the training set at random and returns its index (all training points are assumed equiprobable).
void treeDataSet (const Vec &root, int deepness, int branchingFactor, Mat &dataPoints, int transformIdx=-1) const
 creates a data set, equivalent to building a tree of fixed depth and constant branching factor
Mat returnTreeDataSet (Vec root, int deepness, int branchingFactor, int transformIdx=-1) const
void sequenceDataSet (const Vec &start, int n, Mat &dataPoints, int transformIdx=-1) const
 create a "sequential" dataset: start -> first point -> second point ...
Mat returnSequenceDataSet (Vec start, int n, int transformIdx=-1) const
Vec returnTrainingPoint (int idx) const
 returns the "idx"th data point of the training set
TVec< ReconstructionCandidatereturnReconstructionCandidates (int targetIdx) const
 returns all the reconstructions candidates associated to a given target
Mat returnReconstructions (int targetIdx) const
 returns the reconstructions of the "targetIdx"th data point value in the training set (one reconstruction for each reconstruction candidate)
Mat returnNeighbors (int targetIdx) const
 returns the neighbors chosen to reconstruct the target (one chosen neighbor for each reconstruction candidate associated to the target)
Mat returnTransform (int transformIdx) const
 returns the parameters of the "transformIdx"th transformation
Mat returnAllTransforms () const
 returns the parameters of each transformation (as a KdXd matrix, K = number of transformations, d = dimension of input space)
virtual void build ()
 Simply calls inherited::build() then build_().
void mainLearnerBuild ()
 main initialization operations that have to be done before any training phase
void buildLearnedParameters ()
void generatorBuild (int inputSpaceDim_=2, TVec< Mat > transforms_=TVec< Mat >(), Mat biasSet_=Mat(), real noiseVariance_=-1.0, Vec transformDistribution_=Vec())
 initialization operations that have to be done before a generation process (all the undefined parameters will be initialized randomly)
virtual void makeDeepCopyFromShallowCopy (CopiesMap &copies)
 Transforms a shallow copy into a deep copy.

Static Public Member Functions

static string _classname_ ()
static OptionList & _getOptionList_ ()
static RemoteMethodMap & _getRemoteMethodMap_ ()
static Object * _new_instance_for_typemap_ ()
static bool _isa_ (const Object *o)
static void _static_initialize_ ()
static const PPath & declaringFile ()

Public Attributes

int behavior
 A transformation learner may behave as a learner as well as a generator.
real minimumProba
 This variable is used to ensure p(x,v,t) > 0 at the beginning (see the implementation of randomWeight() for more details).
int transformFamily
 what is the global form of the transformation functions used?
bool withBias
 add a bias to the transformation function?
bool learnNoiseVariance
 is the variance (precision) of the noise random variable learned or fixed? (recall that precision = 1/variance)
bool regOnNoiseVariance
 if we learn the noise variance, do we use the MAP estimator?
bool learnTransformDistribution
 is the transformation distribution learned or fixed?
bool regOnTransformDistribution
 if we learn the transformation distribution, do we use the MAP estimator?
bool emphasisOnDiversity
 When set to True, this modifies the way the transformation parameters are learned: a term representing diversity among transformations, div_factor*sum(||theta_i - theta_j||^2), is added to the function to optimize. The transformations can then no longer all be updated at the same time, so periods and offsets must be defined to know when to update each of them.
real diversityFactor
int initializationMode
 how the initial values of the parameters to learn are chosen
int largeEStepAPeriod
 For a given training point, we do not consider all the possibilities for the hidden variables.
int largeEStepAOffset
int largeEStepBPeriod
int largeEStepBOffset
int noiseVariancePeriod
 If the noise variance (precision) is learned, the following variables tell us when to update it in the maximization steps (see MStep() for more details).
int noiseVarianceOffset
real noiseAlpha
 These 2 parameters have to be defined if the noise variance is learned using a MAP procedure.
real noiseBeta
int transformDistributionPeriod
 If the transformation distribution is learned, the following variables tell us when to update it in the maximization steps (see MStep() for more details).
int transformDistributionOffset
real transformDistributionAlpha
 This parameter has to be defined if the transformation distribution is learned using a MAP procedure.
int transformsPeriod
 tells us when to update the transformation parameters
int transformsOffset
int biasPeriod
int biasOffset
real noiseVariance
 variance of the NOISE random variable.
real transformsVariance
 variance on the transformation parameters (prior distribution = normal with mean 0)
int nbTransforms
 number of transformations
int nbNeighbors
 number of neighbors
Vec transformDistribution
 multinomial distribution over the transformations (stored as log-probabilities)

Static Public Attributes

static StaticInitializer _static_initializer_

Static Protected Member Functions

static void declareOptions (OptionList &ol)
 Declares the class options.
static void declareMethods (RemoteMethodMap &rmm)
 Declares the methods that are remote-callable.

Protected Attributes

Mat transformsSet
 set of transformations: a mdXd matrix (m = number of transformations, d = dimension of the input space)
TVec< Mat > transforms
 views on sub-matrices of the matrix transformsSet (the kth element is a view on the kth dXd block)
Mat biasSet
 set of biases (one per transformation)
TVec< ReconstructionCandidate > reconstructionSet
 a reconstruction set (set of weighted reconstruction candidates)
int inputSpaceDim
 dimension of the input space
int nbTargetReconstructions
 number of hidden-variable combinations kept for a specific target in the reconstruction set.
int nbReconstructions
 total number of combinations (x,v,t) kept in the reconstruction set
int trainingSetLength
 number of samples given in the training set
real transformsSD
 standard deviations for the transformation parameters:
TVec< ReconstructionCandidatetargetReconstructionSet
 Will be used to store a view on the reconstructionSet.
Mat B_C
 Storage space that will be used in the maximization step, in transformation parameters updating process.
TVec< MatB
 Vectors of matrices that will be used in transformations parameters updating process.
TVec< MatC
Vec newDistribution
Vec ses_target
Vec ses_neighbor
Vec ses_predictedTarget
Vec lg_neighbor
Vec lg_predictedTarget
Vec stp_v
real stp_w
Vec fnn_target
Vec fnn_neighbor
Vec fbtrc_target
Vec fbtrc_neighbor
Vec fbtrc_predictedTarget
Vec fbwn_target
Vec fbwn_neighbor
Vec fbwn_predictedTarget
Vec mst_v
Vec mst_target
Vec mst_neighbor
TVec< intmst_pivots
Mat msb_newBiasSet
Vec msb_norms
Vec msb_target
Vec msb_neighbor
Vec msb_reconstruction
Vec msnvMAP_total_k
Vec msnvMAP_target
Vec msnvMAP_neighbor
Vec msnvMAP_reconstruction
Mat mstd_B
Mat mstd_C
Mat mstd_D
Vec mstd_v
Vec mstd_target
Vec mstd_neighbor
TVec< intmstd_pivots

Private Types

typedef PDistribution inherited

Private Member Functions

void build_ ()
 This does the actual building.
void seeTargetReconstructionSet (int targetIdx, TVec< ReconstructionCandidate > &targetReconstructionSet) const
 stores a view on the reconstruction candidates associated to the given target
void seeTrainingPoint (const int idx, Vec &dst) const
 stores the "idx"th training data point into the variable 'dst'
real gamma_sample (real alpha, real beta=1) const
 returns a pseudo-random positive real value drawn from Gamma(alpha, beta)
void dirichlet_sample (real alpha, Vec &sample) const
 GENERATE DIRICHLET RANDOM VARIABLES (source of the algorithm: Wikipedia).
Vec return_dirichlet_sample (real alpha) const
void normalizeTargetWeights (int targetIdx, real totalWeight)
 OPERATIONS ON WEIGHTS.
real randomWeight () const
 returns a random weight
real INIT_weight (real initValue) const
 arithmetic operations on reconstruction weights: CONSTRUCTOR.
real PROBA_weight (real weight) const
 GET CORRESPONDING PROBABILITY.
real DIV_weights (real numWeight, real denomWeight) const
 DIVISION.
real MULT_INVERSE_weight (real weight) const
 MULTIPLICATIVE INVERSE.
real MULT_weights (real weight1, real weight2) const
 MULTIPLICATION.
real SUM_weights (real weight1, real weight2) const
 SUM.
real updateReconstructionWeight (int candidateIdx)
 recomputes and updates the weight of the "candidateIdx"th reconstruction candidate
real updateReconstructionWeight (int candidateIdx, const Vec &target, const Vec &neighbor, int transformIdx, Vec &predictedTarget)
 NOT A USER METHOD!
real computeReconstructionWeight (const ReconstructionCandidate &gc) const
real computeReconstructionWeight (int targetIdx, int neighborIdx, int transformIdx) const
real computeReconstructionWeight (const Vec &target, int neighborIdx, int transformIdx) const
real computeReconstructionWeight (const Vec &target, const Vec &neighbor, int transformIdx) const
real computeReconstructionWeight (const Vec &target, const Vec &neighbor, int transformIdx, Vec &predictedTarget) const
void applyTransformationOn (int transformIdx, const Vec &src, Vec &dst) const
 applies "transformIdx"th transformation on data point "src"
bool isWellDefined (Vec &distribution) const
 verifies that the given multinomial distribution is well-defined
void initEStep ()
 INITIAL E STEP.
void initEStepA ()
 initialization of the reconstruction set, version A
void initEStepB ()
 initialization of the reconstruction set, version B
real expandTargetNeighborPairInReconstructionSet (int targetIdx, int neighborIdx, int candidateStartIdx)
 auxiliary function of "initEStep".
void findNearestNeighbors (int targetIdx, priority_queue< pair< real, int > > &pq)
 auxiliary function of initEStep stores the nearest neighbors for a given target point in a priority queue.
void EStep ()
 E STEP.
void largeEStepA ()
 LARGE E STEP : VERSION A (expectation step)
void findBestTargetReconstructionCandidates (int targetIdx, priority_queue< ReconstructionCandidate > &pq)
 auxiliary function of largeEStepA(); for a given target, stores the km most probable (neighbor, transformation) pairs in a priority queue (k = nb neighbors, m = nb transformations)
void largeEStepB ()
 LARGE E STEP : VERSION B (expectation step)
void findBestWeightedNeighbors (int targetIdx, int transformIdx, priority_queue< ReconstructionCandidate > &pq)
 auxiliary function of largeEStepB(); for a given target x and a given transformation t, stores the best weighted triples (x, neighbor, t) in a priority queue.
void smallEStep ()
 SMALL E STEP (expectation step)
void MStep ()
 M STEP.
void MStepTransformDistribution ()
 maximization step with respect to transformation distribution parameters
void MStepTransformDistributionMAP (real alpha)
 maximization step with respect to transformation distribution parameters (MAP version, alpha = dirichlet prior distribution parameter) NOTE : alpha =1 -> no regularization
void MStepTransformations ()
 maximization step with respect to transformation matrices (MAP version)
void MStepTransformationDiv (int transformIdx)
 maximization step with respect to a specific transformation matrix
void MStepBias ()
 maximization step with respect to transformation bias (MAP version)
void MStepNoiseVariance ()
 maximization step with respect to noise variance
void MStepNoiseVarianceMAP (real alpha, real beta)
 maximization step with respect to noise variance (MAP version, alpha and beta = gamma prior distribution parameters) NOTE : alpha=1, beta=0 -> no regularization
real reconstructionEuclideanDistance (int candidateIdx)
 returns the distance between the reconstruction and the target for the 'candidateIdx'th reconstruction candidate
real reconstructionEuclideanDistance (const Vec &target, const Vec &neighbor, int transformIdx, Vec &reconstruction) const
void nextStage ()
 increments the variable 'stage' by 1

Detailed Description

Definition at line 176 of file TransformationLearner.h.


Member Typedef Documentation

Reimplemented from PLearn::PDistribution.

Definition at line 178 of file TransformationLearner.h.


Constructor & Destructor Documentation

PLearn::TransformationLearner::TransformationLearner ( )

Member Function Documentation

string PLearn::TransformationLearner::_classname_ ( ) [static]

Reimplemented from PLearn::PDistribution.

Definition at line 50 of file TransformationLearner.cc.

OptionList & PLearn::TransformationLearner::_getOptionList_ ( ) [static]

Reimplemented from PLearn::PDistribution.

Definition at line 50 of file TransformationLearner.cc.

RemoteMethodMap & PLearn::TransformationLearner::_getRemoteMethodMap_ ( ) [static]

Reimplemented from PLearn::PDistribution.

Definition at line 50 of file TransformationLearner.cc.

bool PLearn::TransformationLearner::_isa_ ( const Object * o) [static]

Reimplemented from PLearn::PDistribution.

Definition at line 50 of file TransformationLearner.cc.

Object * PLearn::TransformationLearner::_new_instance_for_typemap_ ( ) [static]

Reimplemented from PLearn::PDistribution.

Definition at line 50 of file TransformationLearner.cc.

void PLearn::TransformationLearner::_static_initialize_ ( ) [static]

Reimplemented from PLearn::PDistribution.

Definition at line 50 of file TransformationLearner.cc.

void PLearn::TransformationLearner::applyTransformationOn ( int  transformIdx,
const Vec src,
Vec dst 
) const [inline, private]

applies "transformIdx"th transformation on data point "src"

Definition at line 1706 of file TransformationLearner.cc.

References biasSet, m, PLearn::product(), TRANSFORM_FAMILY_LINEAR, transformFamily, transforms, and withBias.

Referenced by computeReconstructionWeight(), generatePredictedFrom(), MStepBias(), reconstructionEuclideanDistance(), and returnReconstructions().

{
    if(transformFamily==TRANSFORM_FAMILY_LINEAR){
        Mat m  = transforms[transformIdx];
        //transposeProduct(dst,m,src); 
        product(dst,m,src);
        if(withBias){
            dst += biasSet(transformIdx);
        }
    }
    else{ //transformFamily == TRANSFORM_FAMILY_LINEAR_INCREMENT
        Mat m = transforms[transformIdx];
        //transposeProduct(dst,m,src);
        product(dst,m,src);
        dst += src;
        if(withBias){
            dst += biasSet(transformIdx);
        }
    }
}
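The body above reduces to a simple affine map: dst = T*src (+ src for the increment family) (+ bias). As an illustration only, here is a minimal standalone sketch of the same computation outside the PLearn types, assuming a row-major dxd matrix stored in a flat array (all names below are illustrative, not part of PLearn):

#include <cstddef>
#include <vector>

// Sketch of the affine update performed by applyTransformationOn():
// dst = T*src, optionally += src (the linear-increment family),
// optionally += bias.
void applyAffine(const std::vector<double>& T,    // d*d, row-major
                 const std::vector<double>& bias, // d (empty = no bias)
                 const std::vector<double>& src,  // d
                 std::vector<double>& dst,        // d
                 bool linearIncrement)
{
    const std::size_t d = src.size();
    dst.assign(d, 0.0);
    for (std::size_t i = 0; i < d; ++i)
        for (std::size_t j = 0; j < d; ++j)
            dst[i] += T[i * d + j] * src[j];      // product(dst, m, src)
    if (linearIncrement)
        for (std::size_t i = 0; i < d; ++i)
            dst[i] += src[i];                     // dst += src
    if (!bias.empty())
        for (std::size_t i = 0; i < d; ++i)
            dst[i] += bias[i];                    // dst += biasSet(transformIdx)
}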


void PLearn::TransformationLearner::batchGeneratePredictedFrom ( const Vec center,
Mat samples 
) const

fill the matrix "samples" with data points obtained from a given center data point

Definition at line 1206 of file TransformationLearner.cc.

References generatePredictedFrom(), i, inputSpaceDim, PLearn::TMat< T >::length(), PLearn::TVec< T >::length(), PLASSERT, and PLearn::TMat< T >::width().

Referenced by returnGeneratedSamplesFrom(), and treeDataSet().

{
    PLASSERT(center.length() ==inputSpaceDim);
    PLASSERT(samples.width() ==inputSpaceDim);
    int l = samples.length();
    for(int i=0; i<l; i++)
    {
        Vec v = samples(i);
        generatePredictedFrom(center, v);
    }
}
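A hypothetical call site (a sketch only; 'learner' is assumed to be an already-built TransformationLearner): the caller pre-sizes 'samples', and the loop above fills one generated point per row.

// Sketch: draw 100 noisy transformed copies of a source point.
int d = learner.inputsize();   // == inputSpaceDim
Vec center(d);                 // ... filled with a source data point ...
Mat samples(100, d);           // one row per sample to generate
learner.batchGeneratePredictedFrom(center, samples);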


void PLearn::TransformationLearner::batchGeneratePredictedFrom ( const Vec center,
Mat samples,
int  transformIdx 
) const

fill the matrix "samples" with data points obtained form a given center data point

  • we use a specific transformation

Definition at line 1221 of file TransformationLearner.cc.

References generatePredictedFrom(), i, inputSpaceDim, PLearn::TMat< T >::length(), PLearn::TVec< T >::length(), PLASSERT, and PLearn::TMat< T >::width().

{
    PLASSERT(center.length() ==inputSpaceDim);
    PLASSERT(samples.width() ==inputSpaceDim);
    int l = samples.length();
    for(int i=0; i<l; i++)
    {
        Vec v = samples(i);
        generatePredictedFrom(center, v,transformIdx);
    }  
}


void PLearn::TransformationLearner::build ( ) [virtual]

Simply calls inherited::build() then build_().

Reimplemented from PLearn::PDistribution.

Definition at line 533 of file TransformationLearner.cc.

References PLearn::PDistribution::build(), and build_().

Referenced by forget().

{

    // ### Nothing to add here, simply calls build_().
    inherited::build();
    build_();
}


void PLearn::TransformationLearner::build_ ( ) [private]

This does the actual building.

Reimplemented from PLearn::PDistribution.

Definition at line 544 of file TransformationLearner.cc.

References behavior, BEHAVIOR_LEARNER, generatorBuild(), PLearn::PP< T >::isNotNull(), mainLearnerBuild(), and PLearn::PLearner::train_set.

Referenced by build().

{
    // ### This method should do the real building of the object,
    // ### according to set 'options', in *any* situation.
    // ### Typical situations include:
    // ###  - Initial building of an object from a few user-specified options
    // ###  - Building of a "reloaded" object: i.e. from the complete set of
    // ###    all serialised options.
    // ###  - Updating or "re-building" of an object after a few "tuning"
    // ###    options have been modified.
    // ### You should assume that the parent class' build_() has already been
    // ### called.

    // ### In general, you will want to call this class' specific methods for
    // ### conditional distributions.
    // TransformationLearner::setPredictorPredictedSizes(predictor_size,
    //                                          predicted_size,
    //                                          false);
    // TransformationLearner::setPredictor(predictor_part, false);

 

    if(behavior == BEHAVIOR_LEARNER)
    {
        if(train_set.isNotNull())
        {
            mainLearnerBuild();
        }
     
    }
   
    else{
        generatorBuild(); //initialization of the parameters with all the default values
    }
        
}


void PLearn::TransformationLearner::buildLearnedParameters ( )

Definition at line 746 of file TransformationLearner.cc.

References biasSet, INIT_weight(), initNoiseVariance(), initTransformDistribution(), initTransformsParameters(), inputSpaceDim, isWellDefined(), learnNoiseVariance, learnTransformDistribution, PLearn::TVec< T >::length(), nbReconstructions, nbTransforms, noiseVariance, PLASSERT, reconstructionSet, regOnNoiseVariance, regOnTransformDistribution, PLearn::TVec< T >::resize(), PLearn::TMat< T >::subMatRows(), transformDistribution, transforms, transformsSet, UNDEFINED, w, and withBias.

Referenced by declareMethods(), and train().

                                                  {
    
    //LEARNED PARAMETERS


    //set of transformations matrices
    transformsSet = Mat(nbTransforms * inputSpaceDim, inputSpaceDim);
    
    //view on the set of transformations (vector)
    //    each transformation = one matrix 
    transforms.resize(nbTransforms);
    for(int k = 0; k< nbTransforms; k++){
        transforms[k] = transformsSet.subMatRows(k * inputSpaceDim, inputSpaceDim);       
    }
    
    //set of transformations bias (optional)
    if(withBias){
        biasSet = Mat(nbTransforms,inputSpaceDim);       
    }
    else{
        biasSet = Mat(0,0);   
    }

    //choose an initial value for each transformation parameter  (normal distribution) 
    initTransformsParameters();

    //initialize the noise variance
    if(noiseVariance == UNDEFINED){
        if(learnNoiseVariance && regOnNoiseVariance){
            initNoiseVariance();
        }
        else{
            noiseVariance = 1.0;
        }
    }

    //transformDistribution
    if(transformDistribution.length() == 0){
        if(learnTransformDistribution && regOnTransformDistribution)
            initTransformDistribution();
        else{
            transformDistribution.resize(nbTransforms);
            real w = INIT_weight(1.0/nbTransforms);
            for(int k=0; k<nbTransforms ; k++){
                transformDistribution[k] = w;
            }
        }       
    }
    else{
        PLASSERT(transformDistribution.length() == nbTransforms);
        PLASSERT(isWellDefined(transformDistribution));
    }


     //reconstruction set 
    reconstructionSet.resize(nbReconstructions);


}
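Note the layout this sets up: transformsSet is a single matrix of m*d rows and d columns, and each transforms[k] is a view (not a copy) on one dxd block of its rows, so writing through one is visible through the other. A small illustration (a sketch only):

// The k-th transformation occupies rows [k*inputSpaceDim, (k+1)*inputSpaceDim)
// of transformsSet, and subMatRows() returns a view, not a copy:
int k = 0;  // any valid transformation index
Mat kthTransform = transformsSet.subMatRows(k * inputSpaceDim, inputSpaceDim);
kthTransform(0, 0) = 1.0;   // visible through transformsSet as well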


string PLearn::TransformationLearner::classname ( ) const [virtual]

Reimplemented from PLearn::PDistribution.

Definition at line 50 of file TransformationLearner.cc.

real PLearn::TransformationLearner::computeReconstructionWeight ( int  targetIdx,
int  neighborIdx,
int  transformIdx 
) const [inline, private]

Definition at line 1665 of file TransformationLearner.cc.

References computeReconstructionWeight(), inputSpaceDim, and seeTrainingPoint().

{

    Vec target(inputSpaceDim);
    seeTrainingPoint(targetIdx,target);
    return computeReconstructionWeight(target,
                                       neighborIdx,
                                       transformIdx);
}


real PLearn::TransformationLearner::computeReconstructionWeight ( const Vec target,
int  neighborIdx,
int  transformIdx 
) const [inline, private]

Definition at line 1676 of file TransformationLearner.cc.

References computeReconstructionWeight(), inputSpaceDim, and seeTrainingPoint().

{
    Vec neighbor(inputSpaceDim);
    seeTrainingPoint(neighborIdx, neighbor);
    return computeReconstructionWeight(target,neighbor,transformIdx);
}


real PLearn::TransformationLearner::computeReconstructionWeight ( const ReconstructionCandidate & gc) const [inline, private]
real PLearn::TransformationLearner::computeReconstructionWeight ( const Vec target,
const Vec neighbor,
int  transformIdx,
Vec predictedTarget 
) const [inline, private]

Definition at line 1693 of file TransformationLearner.cc.

References applyTransformationOn(), MULT_weights(), noiseVariance, PLearn::powdistance(), transformDistribution, and w.

{
    applyTransformationOn(transformIdx, neighbor, predictedTarget);
    real factor = -1/(2*noiseVariance);
    real w = factor*powdistance(target, predictedTarget);
    return MULT_weights(w, transformDistribution[transformIdx]);      
}
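Read as a log-probability, the weight computed above is -||target - f_t(neighbor)||^2 / (2*noiseVariance) + log p(t): a Gaussian reconstruction term plus the log-prior of the transformation (MULT_weights is an addition because weights are stored as logs). A standalone sketch with illustrative names:

#include <cstddef>
#include <vector>

// Sketch of the weight computed above, in the log domain:
//   w = -||target - predicted||^2 / (2*noiseVariance) + log p(t)
double reconstructionLogWeight(const std::vector<double>& target,
                               const std::vector<double>& predicted,
                               double noiseVariance,
                               double logTransformProba)
{
    double sq = 0.0;                    // powdistance(target, predicted)
    for (std::size_t i = 0; i < target.size(); ++i) {
        const double diff = target[i] - predicted[i];
        sq += diff * diff;
    }
    // MULT_weights(w, log p(t)) is an addition of logs:
    return -sq / (2.0 * noiseVariance) + logTransformProba;
}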


real PLearn::TransformationLearner::computeReconstructionWeight ( const Vec target,
const Vec neighbor,
int  transformIdx 
) const [inline, private]

Definition at line 1685 of file TransformationLearner.cc.

References computeReconstructionWeight(), and inputSpaceDim.

{
    Vec predictedTarget(inputSpaceDim);
    return computeReconstructionWeight(target, neighbor, transformIdx,predictedTarget);
}


void PLearn::TransformationLearner::declareMethods ( RemoteMethodMap rmm) [static, protected]

Declares the methods that are remote-callable.

Reimplemented from PLearn::PDistribution.

Definition at line 318 of file TransformationLearner.cc.

References PLearn::PDistribution::_getRemoteMethodMap_(), buildLearnedParameters(), PLearn::declareMethod(), EStep(), gamma_sample(), generatorBuild(), PLearn::RemoteMethodMap::inherited(), initEStep(), initNoiseVariance(), initTransformDistribution(), initTransformsParameters(), largeEStepA(), largeEStepB(), MStep(), MStepBias(), MStepNoiseVariance(), MStepTransformationDiv(), MStepTransformations(), MStepTransformDistribution(), nextStage(), pickNeighborIdx(), pickTransformIdx(), return_dirichlet_sample(), returnAllTransforms(), returnGeneratedSamplesFrom(), returnNeighbors(), returnPredictedFrom(), returnReconstructionCandidates(), returnReconstructions(), returnSequenceDataSet(), returnTrainingPoint(), returnTransform(), returnTreeDataSet(), setNoiseVariance(), setTransformDistribution(), setTransformsParameters(), and smallEStep().

                                                              {



    rmm.inherited(inherited::_getRemoteMethodMap_());
    
    declareMethod(rmm, 
                  "initTransformsParameters",
                  &TransformationLearner::initTransformsParameters,
                  (BodyDoc("initializes the transformation parameters randomly \n"
                           "  (all parameters are a priori independent and normally distributed)")));
   
    declareMethod(rmm, 
                  "setTransformsParameters",
                  &TransformationLearner::setTransformsParameters,
                  (BodyDoc("initializes the transformation parameters with the given values"),
                   ArgDoc("TVec<Mat> transforms", "initial transformation matrices"),
                   ArgDoc("Mat  biasSet","initial bias (one by transformation) (optional)")));
    declareMethod(rmm, 
                  "initNoiseVariance",
                  &TransformationLearner::initNoiseVariance,
                  (BodyDoc("initializes the noise variance randomly (gamma distribution)")));
    declareMethod(rmm, 
                  "setNoiseVariance",
                  &TransformationLearner::setNoiseVariance,
                  (BodyDoc("initializes the noise variance to the given value"),
                   ArgDoc("real nv","noise variance")));
    declareMethod(rmm, 
                  "initTransformDistribution",
                  &TransformationLearner::initTransformDistribution,
                  (BodyDoc("initializes the transformation distribution randomly \n"
                           "-we use a dirichlet distribution \n"
                           "-we store log-probabilities instead probabilities")));
    declareMethod(rmm, 
                  "setTransformDistribution",
                  &TransformationLearner::setTransformDistribution,
                  (BodyDoc("initializes the transformation distribution with the given values \n"
                           " -the given values might represent log-probabilities"),
                   ArgDoc("Vec td","initial values of the transformation distribution")));
    
    declareMethod(rmm,
                  "returnPredictedFrom",
                  &TransformationLearner::returnPredictedFrom,
                  (BodyDoc("generates a sample data point from a source data point and returns it \n"
                           " - a specific transformation is used"),
                   ArgDoc("const Vec source","source data point"),
                   ArgDoc("int transformIdx","index of the transformation (optional)"),
                   RetDoc("Vec")));
    declareMethod(rmm,
                  "returnGeneratedSamplesFrom",
                  &TransformationLearner::returnGeneratedSamplesFrom,
                  (BodyDoc("generates samples data points form a source data point and return them \n"
                           "    -we use a specific transformation"),
                   ArgDoc("Vec source","source data point"),
                   ArgDoc("int n","number of samples"),
                   ArgDoc("int transformIdx", "index of the transformation (optional)"),
                   RetDoc("nXd matrix (one row = one sample)")));
    declareMethod(rmm,
                  "pickTransformIdx",
                  &TransformationLearner::pickTransformIdx,
                  (BodyDoc("select a transformation ramdomly"),
                   RetDoc("int (index of the choosen transformation)")));
               
    declareMethod(rmm,
                  "pickNeighborIdx",
                  &TransformationLearner::pickNeighborIdx,
                  (BodyDoc("select a neighbor among the data points in the training set"),
                   RetDoc("int (index of the data point in the training set)")));
    declareMethod(rmm,
                  "returnTreeDataSet",
                  &TransformationLearner::returnTreeDataSet,
                  (BodyDoc("creates and returns a data set using a 'tree generation process'\n"
                           " see 'treeDataSet()' implantation for more details"),
                   ArgDoc("Vec root","data point from which all the other data points will derive (directly or indirectly)"),
                   ArgDoc("int deepness","deepness of the tree reprenting the samples created"),
                   ArgDoc("int branchingFactor","branching factor of the tree representing the samples created"),
                   ArgDoc("int transformIdx", "index of the transformation to use (optional)"),
                   RetDoc("Mat (one row = one sample)")));
    declareMethod(rmm,
                  "returnSequenceDataSet",
                  &TransformationLearner::returnSequenceDataSet,
                  (BodyDoc("creates and returns a data set using a 'sequential procedure' \n"
                           "see 'sequenceDataSet()' implantation for more details"),
                   ArgDoc("const Vec start","data point from which all the other data points will derice (directly or indirectly)"),
                   ArgDoc("int n","number of sample data points to generate"),
                   ArgDoc("int transformIdx","index of the transformation to use (optional)"),
                   RetDoc("nXd matrix (one row = one sample)")));
    declareMethod(rmm,
                  "returnTrainingPoint",
                  &TransformationLearner::returnTrainingPoint,
                  (BodyDoc("returns the 'idx'th data point in the training set"),
                   ArgDoc("int idx","index of the data point in the training set"),
                   RetDoc("Vec")));
    declareMethod(rmm,
                  "returnReconstructionCandidates",
                  &TransformationLearner::returnReconstructionCandidates,
                  (BodyDoc("return all the reconstructions candidates associated to a given target"),
                   ArgDoc("int targetIdx","index of the target data point in the training set"),
                   RetDoc("TVec<ReconstructionCandidate>")));
    declareMethod(rmm,
                  "returnReconstructions",
                  &TransformationLearner::returnReconstructions,
                  (BodyDoc("returns the reconstructions of the 'targetIdx'th data point in the training set \n"
                           "(one reconstruction per reconstruction candidate)"),
                   ArgDoc("int targetIdx","index of the target data point in the training set"),
                   RetDoc("Mat (ith row = reconstruction associated to the ith reconstruction candidate)")));
    declareMethod(rmm,
                  "returnNeighbors",
                  &TransformationLearner::returnNeighbors,
                  (BodyDoc("returns the choosen neighbors of the target\n"
                           "  (one neighbor per reconstruction candidate)"),
                   ArgDoc("int targetIdx","index of the target in the training set"),
                   RetDoc("Mat (ith row = neighbor associated to the ith reconstruction candidate)")));
    declareMethod(rmm,
                  "returnTransform",
                  &TransformationLearner::returnTransform,
                  (BodyDoc("returns the parameters of the 'transformIdx'th transformation"),
                   ArgDoc("int transformIdx","index of the transformation"),
                   RetDoc("Mat")));
    declareMethod(rmm,
                  "returnAllTransforms",
                  &TransformationLearner::returnAllTransforms,
                  (BodyDoc("returns the parameters of each transformation"),
                   RetDoc("mdXd matrix, m = number of transformations \n"
                          "             d = dimensionality of the input space")));
    
    declareMethod(rmm,"buildLearnedParameters",
                  &TransformationLearner::buildLearnedParameters,
                  (BodyDoc("builds the structures related to learned parameters")));
    declareMethod(rmm,
                  "generatorBuild",
                  &TransformationLearner::generatorBuild,
                  (BodyDoc("generator specific initialization operations"),
                   ArgDoc("int inputSpaceDim","dimensionality of the input space"),
                   ArgDoc("TVec<Mat> transforms_", "transformations matrices"),
                   ArgDoc("Mat biasSet_","transformations bias"),
                   ArgDoc("real noiseVariance_","noise variance"),
                   ArgDoc("transformDistribution_", "transformation distribution")));
    declareMethod(rmm,
                  "gamma_sample",
                  &TransformationLearner::gamma_sample,
                  (BodyDoc("returns a pseudo-random positive real value using the distribution p(x)=Gamma(x |alpha,beta)"),
                   ArgDoc("real alpha",">=1"),
                   ArgDoc("real beta",">= 0 (optional: default value==1)"),
                   RetDoc("real >=0")));
    declareMethod(rmm,
                  "return_dirichlet_sample",
                  &TransformationLearner::return_dirichlet_sample,
                  (BodyDoc("returns a pseudo-random positive real vector using the distribution p(x)=Dirichlet(x|alpha)"),
                   ArgDoc("real alpha","all the parameters of the distribution are equal to 'alpha'"),
                   RetDoc("Vec (each element is between 0 and 1 , the elements sum to one)")));
/* declareMethod(rmm,
   "return_dirichlet_sample",
   &TransformationLearner::return_dirichlet_sample,
   (BodyDoc("returns a pseudo-random positive real vector using the distribution p(x)=Dirichlet(x|alphas)"),
   ArgDoc("Vec alphas","parameters of the distribution"),
   RetDoc("Vec (each element is between 0 and 1, the elements sum to one )"))); */
    declareMethod(rmm,
                  "initEStep",
                  &TransformationLearner::initEStep,
                  (BodyDoc("initial expectation step")));
    declareMethod(rmm,
                  "EStep",
                  &TransformationLearner::EStep,
                  (BodyDoc("coordination of the different kinds of expectation steps")));
    declareMethod(rmm,
                  "largeEStepA",
                  &TransformationLearner::largeEStepA,
                  (BodyDoc("update the reconstruction set \n"
                           "for each target, keeps the most probable <neighbor, transformation> pairs")));
    declareMethod(rmm,
                  "largeEStepB",
                  &TransformationLearner::largeEStepB,
                  (BodyDoc("update the reconstruction set \n"
                           "for each <target,transformation> pairs,choose the most probable neighbors ")));
    declareMethod(rmm,
                  "smallEStep",
                  &TransformationLearner::smallEStep,
                  (BodyDoc("update the weights of the reconstruction candidates")));
    declareMethod(rmm,
                  "MStep",
                  &TransformationLearner::MStep,
                  (BodyDoc("coordination of the different kinds of maximization step")));
    declareMethod(rmm,
                  "MStepTransformDistribution",
                  &TransformationLearner::MStepTransformDistribution,
                  (BodyDoc("maximization step with respect to transformation distribution parameters")));
    declareMethod(rmm,
                  "MStepTransformations",
                  &TransformationLearner::MStepTransformations,
                  (BodyDoc("maximization step with respect to transformation matrices (MAP version)")));
    declareMethod(rmm,
                  "MStepTransformationDiv",
                  &TransformationLearner::MStepTransformationDiv,
                  (BodyDoc("maximization step with respect to a specific transformation matrix (MAP version + emphasis on diversity)"),
                   ArgDoc("int transformIdx","index of the transformation matrix to optimize")));
    declareMethod(rmm,
                  "MStepBias",
                  &TransformationLearner::MStepBias,
                  (BodyDoc("maximization step with respect to transformation bias (MAP version)")));
    declareMethod(rmm,
                  "MStepNoiseVariance",
                  &TransformationLearner::MStepNoiseVariance,
                  (BodyDoc("maximization step with respect to noise variance")));
    declareMethod(rmm,
                  "nextStage",
                  &TransformationLearner::nextStage,
                  (BodyDoc("increment 'stage' by one")));

}


void PLearn::TransformationLearner::declareOptions ( OptionList ol) [static, protected]

Declares the class options.

Reimplemented from PLearn::PDistribution.

Definition at line 93 of file TransformationLearner.cc.

References behavior, biasOffset, biasPeriod, biasSet, PLearn::OptionBase::buildoption, PLearn::declareOption(), PLearn::PDistribution::declareOptions(), diversityFactor, emphasisOnDiversity, initializationMode, inputSpaceDim, largeEStepAOffset, largeEStepAPeriod, largeEStepBPeriod, learnNoiseVariance, PLearn::OptionBase::learntoption, learnTransformDistribution, minimumProba, nbNeighbors, nbTransforms, noiseAlpha, noiseBeta, noiseVariance, noiseVarianceOffset, noiseVariancePeriod, reconstructionSet, regOnNoiseVariance, regOnTransformDistribution, PLearn::PLearner::train_set, transformDistribution, transformDistributionAlpha, transformDistributionOffset, transformDistributionPeriod, transformFamily, transforms, transformsOffset, transformsPeriod, transformsSet, transformsVariance, and withBias.

{
    // ### Declare all of this object's options here.
    // ### For the "flags" of each option, you should typically specify
    // ### one of OptionBase::buildoption, OptionBase::learntoption or
    // ### OptionBase::tuningoption. If you don't provide one of these three,
    // ### this option will be ignored when loading values from a script.
    // ### You can also combine flags, for example with OptionBase::nosave:
    // ### (OptionBase::buildoption | OptionBase::nosave)

    // ### ex:
    // declareOption(ol, "myoption", &TransformationLearner::myoption,
    //               OptionBase::buildoption,
    //               "Help text describing this option");
    // ...


    //buildoption
  

    declareOption(ol,
                  "behavior",
                  &TransformationLearner::behavior,
                  OptionBase::buildoption,
                  "a transformationLearner might behave as a learner or as a generator");
    declareOption(ol,
                  "minimumProba",
                  &TransformationLearner::minimumProba,
                  OptionBase::buildoption,
                  "initial weight that will be needed sometimes");
    declareOption(ol,
                  "transformFamily",
                  &TransformationLearner::transformFamily,
                  OptionBase::buildoption,
                  "global form of the transformation functions");
    declareOption(ol,
                  "withBias",
                  &TransformationLearner::withBias,
                  OptionBase::buildoption,
                  "yes/no: add a bias to the transformation function ?");
    declareOption(ol,
                  "learnNoiseVariance",
                  &TransformationLearner::learnNoiseVariance,
                  OptionBase::buildoption,
                  "the noise variance is ...fixed/learned ?");
    declareOption(ol,
                  "regOnNoiseVariance",
                  &TransformationLearner::regOnNoiseVariance,
                  OptionBase::buildoption,
                  "yes/no: prior assumptions on the noise variance?");
    declareOption(ol,
                  "learnTransformDistribution",
                  &TransformationLearner::learnTransformDistribution,
                  OptionBase::buildoption,
                  "the transformation distribution is ... fixed/learned ?");
    declareOption(ol,
                  "regOnTransformDistribution",
                  &TransformationLearner::regOnTransformDistribution,
                  OptionBase::buildoption,
                  "yes/no: prior assumptions on the transformation distribution ?");
    
    declareOption(ol,
                  "emphasisOnDiversity",
                  &TransformationLearner::emphasisOnDiversity,
                  OptionBase::buildoption,
                  "increase probability of a set of transformations if they are more diversified \n"
                  "note: -the learning process is changed :\n"
                  "       the transformation functions can no more be updated at the same time \n"
                  "      -we assume there are no bias added to the transformation functions \n");

    declareOption(ol,
                  "diversityFactor",
                  &TransformationLearner::diversityFactor,
                  OptionBase::buildoption,
                  "positive real number: high value  gives  high importance to diversity among transformations \n"
                  "(has an effect only if the boolean 'emphasisOnDiversity' is set to True)\n");
    declareOption(ol,
                  "initializationMode",
                  &TransformationLearner::initializationMode,
                  OptionBase::buildoption,
                  "how the initial values of the parameters to learn are choosen?");
    
    declareOption(ol,
                  "largeEStepAPeriod",
                  &TransformationLearner::largeEStepAPeriod,
                  OptionBase::buildoption,
                  "time interval between two updates of the reconstruction set\n"
                  "(version A, method largeEStepA())");
    declareOption(ol,
                  "largeEStepAOffset",
                  &TransformationLearner::largeEStepAOffset,
                  OptionBase::buildoption,
                  "time of the first update of the reconstruction set"
                  "(version A, method largeEStepA())");
    declareOption(ol,
                  "largeEStepBPeriod",
                  &TransformationLearner::largeEStepBPeriod,
                  OptionBase::buildoption,
                  "time interval between two updates of the reconstruction set\n"
                  "(version  B, method largeEStepB())"); 
    declareOption(ol,
                  "noiseVariancePeriod",
                  &TransformationLearner::noiseVariancePeriod,
                  OptionBase::buildoption,
                  "time interval between two updates of the noise variance");
    declareOption(ol,
                  "noiseVarianceOffset",
                  &TransformationLearner::noiseVarianceOffset,
                  OptionBase::buildoption,
                  "time of the first update of the noise variance");
    declareOption(ol,
                  "noiseAlpha",
                  &TransformationLearner::noiseAlpha,
                  OptionBase::buildoption,
                  "parameter of the prior distribution of the noise variance");
   declareOption(ol,
                 "noiseBeta",
                 &TransformationLearner::noiseBeta,
                 OptionBase::buildoption,
                 "parameter of the prior distribution of the noise variance");
   declareOption(ol,
                 "transformDistributionPeriod",
                 &TransformationLearner::transformDistributionPeriod,
                 OptionBase::buildoption,
                 "time interval between two updates of the transformation distribution");
   declareOption(ol, 
                 "transformDistributionOffset",
                 &TransformationLearner::transformDistributionOffset,
                 OptionBase::buildoption,
                 "time of the first update of the transformation distribution");
   declareOption(ol, 
                 "transformDistributionAlpha",
                 &TransformationLearner::transformDistributionAlpha,
                 OptionBase::buildoption,
                 "parameter of the prior distribution of the transformation distribution");
   declareOption(ol,
                 "transformsPeriod",
                 &TransformationLearner::transformsPeriod,
                 OptionBase::buildoption,
                 "time interval between two updates of the transformations matrices");
   declareOption(ol,
                 "transformsOffset",
                 &TransformationLearner::transformsOffset,
                 OptionBase::buildoption,
                 "time of the first update of the transformations matrices");

   declareOption(ol,
                 "biasPeriod",
                 &TransformationLearner::biasPeriod,
                 OptionBase::buildoption,
                 "time interval between two updates of the transformations bias");
   declareOption(ol,
                 "biasOffset",
                 &TransformationLearner::biasOffset,
                 OptionBase::buildoption,
                 "time of the first update of the transformations bias");

   declareOption(ol, 
                 "noiseVariance",
                 &TransformationLearner::noiseVariance,
                 OptionBase::buildoption,
                 "noise variance (noise = random variable normally distributed)");
   declareOption(ol, 
                 "transformsVariance",
                 &TransformationLearner::transformsVariance,
                 OptionBase::buildoption,
                 "variance on the transformation parameters (normally distributed)");
   declareOption(ol, 
                 "nbTransforms",
                 &TransformationLearner::nbTransforms,
                 OptionBase::buildoption,
                 "how many transformations?");
   declareOption(ol, 
                 "nbNeighbors",
                 &TransformationLearner::nbNeighbors,
                 OptionBase::buildoption,
                 "how many neighbors?");
   declareOption(ol, 
                 "transformDistribution",
                 &TransformationLearner::transformDistribution,
                 OptionBase::buildoption,
                 "transformation distribution");
   
   //learntoption
   declareOption(ol,
                 "train_set",
                 &TransformationLearner::train_set,
                 OptionBase::learntoption,
                 "We remember the training set, as this is a memory-based distribution." );
   declareOption(ol,
                 "transformsSet",
                 &TransformationLearner::transformsSet,
                 OptionBase::learntoption,
                 "set of transformations \n)"
                 "implemented as a mdXd matrix,\n"
                 "     where m is the number of transformations\n"
                 "           and d is dimensionality of the input space");
   declareOption(ol,
                 "transforms",
                 &TransformationLearner::transforms,
                 OptionBase::learntoption,
                 "set of transformations\n"
                 "vector form of the previous set:\n)"
                 "    kth element of the vector = view on the kth sub-matrix");
   declareOption(ol,
                 "biasSet",
                 &TransformationLearner::biasSet,
                 OptionBase::learntoption,
                 "set of bias (one by transformation)");
   declareOption(ol,
                 "inputSpaceDim",
                 &TransformationLearner::inputSpaceDim,
                 OptionBase::learntoption,
                 "dimensionality of the input space");
   
   declareOption(ol,
                 "reconstructionSet",
                 &TransformationLearner::reconstructionSet,
                 OptionBase::learntoption,
                 "set of weighted reconstruction candidates");
 
   // Now call the parent class' declareOptions().
   inherited::declareOptions(ol);
}
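For illustration, the build options declared above can be set programmatically through PLearn's generic setOption()/build() mechanism; this is a sketch under the assumption of standard PLearn usage, and the option values are arbitrary:

// Sketch: configure a TransformationLearner via its declared options.
PP<TransformationLearner> learner = new TransformationLearner();
learner->setOption("nbTransforms", "4");
learner->setOption("nbNeighbors", "10");
learner->setOption("withBias", "1");
learner->build();   // applies the options set above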


static const PPath& PLearn::TransformationLearner::declaringFile ( ) [inline, static]

Reimplemented from PLearn::PDistribution.

Definition at line 403 of file TransformationLearner.h.

TransformationLearner * PLearn::TransformationLearner::deepCopy ( CopiesMap & copies) const [virtual]

Reimplemented from PLearn::PDistribution.

Definition at line 50 of file TransformationLearner.cc.

void PLearn::TransformationLearner::dirichlet_sample ( real  alpha,
Vec sample 
) const [private]

GENERATE DIRICHLET RANDOM VARIABLES (source of the algorithm: Wikipedia).

Returns a pseudo-random positive real vector x using the distribution p(x) = Dirichlet(x | all parameters = alpha): all elements of the vector are between 0 and 1, and they sum to 1.

Definition at line 1507 of file TransformationLearner.cc.

References d, gamma_sample(), i, PLearn::TVec< T >::length(), and PLearn::sum().

Referenced by initTransformDistribution(), and return_dirichlet_sample().

                                                                         {
    int d = sample.length();
    real sum = 0;
    for(int i=0;i<d;i++){
        sample[i]=gamma_sample(alpha);
        sum += sample[i];
    }
    for(int i=0;i<d;i++){
        sample[i]/=sum;
    }
}
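The same gamma-normalization construction, as a standalone sketch using the C++ standard library instead of PLearn's RNG:

#include <random>
#include <vector>

// Sketch: a Dirichlet(alpha, ..., alpha) sample = d independent Gamma(alpha, 1)
// variates normalized to sum to 1 -- the construction used above.
std::vector<double> dirichletSample(int d, double alpha, std::mt19937& rng)
{
    std::gamma_distribution<double> gamma(alpha, 1.0);
    std::vector<double> sample(d);
    double sum = 0.0;
    for (int i = 0; i < d; ++i) {
        sample[i] = gamma(rng);
        sum += sample[i];
    }
    for (int i = 0; i < d; ++i)
        sample[i] /= sum;
    return sample;
}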


real PLearn::TransformationLearner::DIV_weights ( real  numWeight,
real  denomWeight 
) const [inline, private]

GET CORRESPONDING PROBABILITY.

arithmetic operations on reconstruction weights: DIVISION. In our particular case, numWeight = log(w1) and denomWeight = log(w2), and we want weight = log(w1/w2) = log(w1) - log(w2) = numWeight - denomWeight.

Definition at line 1586 of file TransformationLearner.cc.

Referenced by MStepTransformDistributionMAP(), and normalizeTargetWeights().

{
    return numWeight - denomWeight;
}
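Since reconstruction weights are stored as log-probabilities, every weight operator reduces to elementary log-domain arithmetic; the only subtle one is SUM_weights, which needs a log-sum-exp to stay numerically stable. A sketch of the whole family (free functions with illustrative names, mirroring the private methods):

#include <algorithm>
#include <cmath>

double initWeight(double p)         { return std::log(p);  } // INIT_weight: log(0) = -inf is the sum identity
double probaOf(double lw)           { return std::exp(lw); } // PROBA_weight
double divWeights(double ln, double ld)  { return ln - ld; } // DIV_weights
double multInverse(double lw)       { return -lw;          } // MULT_INVERSE_weight
double multWeights(double l1, double l2) { return l1 + l2; } // MULT_weights
double sumWeights(double l1, double l2)                      // SUM_weights: log(w1 + w2)
{
    const double hi = std::max(l1, l2), lo = std::min(l1, l2);
    return hi + std::log1p(std::exp(lo - hi));               // stable log-sum-exp
}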


void PLearn::TransformationLearner::EStep ( ) [private]

E STEP.

Coordination of the different kinds of expectation steps, which are: largeEStepA, largeEStepB, smallEStep.

Definition at line 1886 of file TransformationLearner.cc.

References largeEStepA(), largeEStepAOffset, largeEStepAPeriod, largeEStepB(), largeEStepBOffset, largeEStepBPeriod, smallEStep(), and PLearn::PLearner::stage.

Referenced by declareMethods(), and train().


real PLearn::TransformationLearner::expandTargetNeighborPairInReconstructionSet ( int  targetIdx,
int  neighborIdx,
int  candidateStartIdx 
) [private]

auxiliary function of "initEStep".

For a given pair (target, neighbor), creates all the possible reconstruction candidates; returns the total weight of the reconstruction candidates created.

Definition at line 1822 of file TransformationLearner.cc.

References INIT_weight(), nbTransforms, randomWeight(), reconstructionSet, and SUM_weights().

Referenced by initEStepA().

{
    int candidateIdx = candidateStartIdx;
    real weight, totalWeight = INIT_weight(0);  
    for(int transformIdx=0; transformIdx<nbTransforms; transformIdx ++){
       
        weight = randomWeight(); 
        totalWeight = SUM_weights(totalWeight,weight);
        reconstructionSet[candidateIdx] = ReconstructionCandidate(targetIdx, 
                                                                  neighborIdx,
                                                                  transformIdx,
                                                                  weight);
    
        candidateIdx ++;
    }
    return totalWeight;    
}


void PLearn::TransformationLearner::findBestTargetReconstructionCandidates ( int  targetIdx,
priority_queue< ReconstructionCandidate > &  pq 
) [private]

auxiliary function of largeEStepA(); for a given target, stores the km most probable (neighbor, transformation) pairs in a priority queue (k = nb neighbors, m = nb transformations)

Definition at line 1935 of file TransformationLearner.cc.

References computeReconstructionWeight(), fbtrc_neighbor, fbtrc_predictedTarget, fbtrc_target, nbTargetReconstructions, nbTransforms, PLASSERT, PLearn::PLearner::random_gen, seeTrainingPoint(), and trainingSetLength.

Referenced by largeEStepA().

{
    //we want an empty queue
    PLASSERT(pq.empty()); 
    
    real weight;
    seeTrainingPoint(targetIdx, fbtrc_target);
    //for each potential neighbor
    for(int neighborIdx=0; neighborIdx<trainingSetLength; neighborIdx++){
        if(neighborIdx != targetIdx){
            seeTrainingPoint(neighborIdx, fbtrc_neighbor);
            for(int transformIdx=0; transformIdx<nbTransforms; transformIdx++){
                weight = computeReconstructionWeight(fbtrc_target, 
                                                     fbtrc_neighbor, 
                                                     transformIdx,
                                                     fbtrc_predictedTarget);
                
                //if the weight is among the biggest weights seen so far,
                //keep it until a bigger one pushes it out.
                if(int(pq.size()) < nbTargetReconstructions){
                    pq.push(ReconstructionCandidate(targetIdx,
                                                    neighborIdx,
                                                    transformIdx,
                                                    weight));  
                }
                else if (weight > pq.top().weight){ 
                    pq.pop();
                    pq.push(ReconstructionCandidate(targetIdx,
                                                    neighborIdx,
                                                    transformIdx,
                                                    weight));
                }
                else if (weight == pq.top().weight){
                    if(random_gen->uniform_sample()>0.5){
                        pq.pop();
                        pq.push(ReconstructionCandidate(targetIdx,
                                                        neighborIdx,
                                                        transformIdx,
                                                        weight));
                    }
                }
            }
        }     
    }
}

void PLearn::TransformationLearner::findBestWeightedNeighbors ( int  targetIdx,
int  transformIdx,
priority_queue< ReconstructionCandidate > &  pq 
) [private]

auxiliary function of largeEStepB(): for a given target x and a given transformation t, stores the best weighted triples (x, neighbor, t) in a priority queue.

Definition at line 2023 of file TransformationLearner.cc.

References computeReconstructionWeight(), fbwn_neighbor, fbwn_predictedTarget, fbwn_target, nbNeighbors, PLASSERT, PLearn::PLearner::random_gen, seeTrainingPoint(), and trainingSetLength.

Referenced by largeEStepB().

{
    //we want an empty queue
    PLASSERT(pq.empty()); 
    
    real weight; 
    seeTrainingPoint(targetIdx, fbwn_target);
    //for each potential neighbor
    for(int neighborIdx=0; neighborIdx<trainingSetLength; neighborIdx++){
        if(neighborIdx != targetIdx){ //(the target cannot be his own neighbor)
            seeTrainingPoint(neighborIdx, fbwn_neighbor);
            weight = computeReconstructionWeight(fbwn_target, 
                                                 fbwn_neighbor, 
                                                 transformIdx,
                                                 fbwn_predictedTarget);
            //if the weight of the triple is among the "nbNeighbors"
            //biggest seen, keep it until a bigger weight shows up
            if(int(pq.size()) < nbNeighbors){
                pq.push(ReconstructionCandidate(targetIdx,
                                                neighborIdx, 
                                                transformIdx,
                                                weight));
            }
            else if (weight > pq.top().weight){
                pq.pop();
                pq.push(ReconstructionCandidate(targetIdx,
                                                neighborIdx,
                                                transformIdx,
                                                weight));
            }
            else if (weight == pq.top().weight){
                if(random_gen->uniform_sample() > 0.5){
                    pq.pop();
                    pq.push(ReconstructionCandidate(targetIdx,
                                                    neighborIdx,
                                                    transformIdx,
                                                    weight));
                }
            }
        }
    }   
}

void PLearn::TransformationLearner::findNearestNeighbors ( int  targetIdx,
priority_queue< pair< real, int > > &  pq 
) [private]

auxiliary function of initEStep: stores the nearest neighbors of a given target point in a priority queue.

Definition at line 1846 of file TransformationLearner.cc.

References PLearn::dist(), fnn_neighbor, fnn_target, i, nbNeighbors, PLASSERT, PLearn::powdistance(), PLearn::PLearner::random_gen, seeTrainingPoint(), and trainingSetLength.

Referenced by initEStepA().

{
    
    //we want an empty queue
    PLASSERT(pq.empty()); 
  
    //capture the target from its index in the training set
    seeTrainingPoint(targetIdx, fnn_target);
    
    //for each potential neighbor,
    real dist;    
    for(int i=0; i<trainingSetLength; i++){
        if(i != targetIdx){ //(the target cannot be its own neighbor)
            //computes the distance to the target
            seeTrainingPoint(i, fnn_neighbor);
            dist = powdistance(fnn_target, fnn_neighbor); 
            //if the distance is among the "nbNeighbors" smallest
            //distances seen, keep it until a closer neighbor shows up
            if(int(pq.size()) < nbNeighbors){
                pq.push(pair<real,int>(dist,i));
            }
            else if (dist < pq.top().first){
                pq.pop();
                pq.push(pair<real,int>(dist,i));
            }
            else if(dist == pq.top().first){
                if(random_gen->uniform_sample() >0.5){
                    pq.pop();
                    pq.push(pair<real,int>(dist,i));
                }
            }
        }
    }    
}

void PLearn::TransformationLearner::forget ( ) [virtual]

(Re-)initializes the PDistribution in its fresh state (that state may depend on the 'seed' option).

And sets 'stage' back to 0 (this is the stage of a fresh learner!).

A typical forget() method should do the following:

  • initialize a random number generator with the seed option
  • initialize the learner's parameters, using this random generator
  • stage = 0

Reimplemented from PLearn::PDistribution.

Definition at line 585 of file TransformationLearner.cc.

References build(), PLearn::PDistribution::forget(), and PLearn::PLearner::stage.

{
    inherited::forget();
    stage = 0;
    build();
}

real PLearn::TransformationLearner::gamma_sample ( real  alpha,
real  beta = 1 
) const [private]

GENERATE GAMMA RANDOM VARIABLES.

Source of the algorithm: http://oldmill.uchicago.edu/~wilder/Code/random/Papers/Marsaglia_00_SMGGV.pdf. Returns a pseudo-random positive real number x drawn from the distribution p(x) = Gamma(alpha, beta). (The Marsaglia-Tsang method assumes alpha >= 1; the callers enforce noiseAlpha >= 1.)

Definition at line 1480 of file TransformationLearner.cc.

References c, d, pl_log, PLearn::pow(), PLearn::PLearner::random_gen, u, and x.

Referenced by declareMethods(), dirichlet_sample(), and initNoiseVariance().

{
    real x, u, d, v;
    d = alpha - 1.0/3.0;
    do{
        x = random_gen->gaussian_01();
        u = random_gen->uniform_sample();
        v = pow(1 + x/pow(9*d, 0.5), 3.0);
        //draws with v <= 0 are rejected outright (pl_log(v) would be
        //undefined); otherwise, accept the draw when
        //log(u) < 0.5*x^2 + d - d*v + d*log(v)
    }
    while(v <= 0 || pl_log(u) >= 0.5*pow(x,2) + d - d*v + d*pl_log(v));
    return d*v/beta;
}
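
As a quick sanity check (illustrative only, written from inside the class): Gamma(alpha, beta) has mean alpha/beta, so the empirical mean of many samples should approach that value.

    // hypothetical check: with alpha = 2 and beta = 4 the running
    // average of the samples should approach 2/4 = 0.5
    real acc = 0;
    for (int n = 0; n < 10000; n++)
        acc += gamma_sample(2.0, 4.0);
    real empiricalMean = acc / 10000;   // close to 0.5 for large n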

void PLearn::TransformationLearner::generate ( Vec y) const [virtual]

Return a pseudo-random sample generated from the conditional distribution, of density p(y | x).

generate a point using the training set:

  • choose randomly a neighbor among the data points in the training set
  • choose randomly a transformation
  • apply the transformation on the chosen neighbor
  • add some noise

Reimplemented from PLearn::PDistribution.

Definition at line 613 of file TransformationLearner.cc.

References generatePredictedFrom(), inputSpaceDim, PLearn::TVec< T >::length(), pickNeighborIdx(), PLASSERT, PLearn::TVec< T >::resize(), and seeTrainingPoint().

{
    PLASSERT(y.length() == inputSpaceDim);
    int neighborIdx = pickNeighborIdx();
    Vec neighbor(inputSpaceDim);
    seeTrainingPoint(neighborIdx, neighbor);
    generatePredictedFrom(neighbor, y);
}

void PLearn::TransformationLearner::generatePredictedFrom ( const Vec source,
Vec sample 
) const

GENERATION FUNCTIONS.

generates a sample data point from a source data point

Definition at line 1162 of file TransformationLearner.cc.

References pickTransformIdx().

Referenced by batchGeneratePredictedFrom(), generate(), and returnPredictedFrom().

{
    
    int transformIdx = pickTransformIdx();
    generatePredictedFrom(source, sample, transformIdx);
}

void PLearn::TransformationLearner::generatePredictedFrom ( const Vec source,
Vec sample,
int  transformIdx 
) const

generates a sample data point from a source data point with a specific transformation

Definition at line 1171 of file TransformationLearner.cc.

References applyTransformationOn(), d, i, inputSpaceDim, PLearn::TVec< T >::length(), nbTransforms, noiseVariance, PLASSERT, PLearn::pow(), and PLearn::PLearner::random_gen.

{
    real noiseSD = pow(noiseVariance, 0.5);
    int d = source.length();
    PLASSERT(d == inputSpaceDim);
    PLASSERT(sample.length() == inputSpaceDim);
    PLASSERT(0<= transformIdx && transformIdx<nbTransforms);
    
    //apply the transformation
    applyTransformationOn(transformIdx,source,sample);
    
    //add noise
    for(int i=0; i<d; i++){
        sample[i] += random_gen->gaussian_mu_sigma(0, noiseSD);
    } 
}

void PLearn::TransformationLearner::generatorBuild ( int  inputSpaceDim_ = 2,
TVec< Mat transforms_ = TVec<Mat>(),
Mat  biasSet_ = Mat(),
real  noiseVariance_ = -1.0,
Vec  transformDistribution_ = Vec() 
)

initialization operations that have to be done before a generation process (all the undefined parameters will be initialized randomly)

Definition at line 998 of file TransformationLearner.cc.

References biasSet, initNoiseVariance(), initTransformDistribution(), initTransformsParameters(), inputSpaceDim, PLearn::TVec< T >::length(), nbTransforms, noiseAlpha, noiseBeta, PLearn::TVec< T >::resize(), setNoiseVariance(), setTransformDistribution(), setTransformsParameters(), PLearn::sqrt(), PLearn::TMat< T >::subMatRows(), transformDistributionAlpha, transforms, transformsSD, transformsSet, transformsVariance, and withBias.

Referenced by build_(), and declareMethods().

{
    
    inputSpaceDim = inputSpaceDim_;
    transformsSD = sqrt(transformsVariance);
    

    //transformations parameters

    
    transformsSet = Mat(nbTransforms * inputSpaceDim, inputSpaceDim);
    transforms.resize(nbTransforms);
    for(int k = 0; k< nbTransforms; k++){
        transforms[k] = transformsSet.subMatRows(k * inputSpaceDim, inputSpaceDim);       
    }
    
    if(withBias){
        biasSet = Mat(nbTransforms,inputSpaceDim);
    }
    else{
        biasSet = Mat(0,0);
    }
    if(transforms_.length() == 0){
        initTransformsParameters();
    }
    else{
        setTransformsParameters(transforms_,biasSet_);
    }
    

    //noise variance
    if(noiseAlpha < 1){
            noiseAlpha = 1;
        }
    if(noiseBeta <= 0){
        noiseBeta = 1;
    }
    if(noiseVariance_ <= 0){
        initNoiseVariance();
    }
    else{
        setNoiseVariance(noiseVariance_);
    }
    //transformation distribution
    if(transformDistributionAlpha <=0)
        transformDistributionAlpha = 10;
    if(transformDistribution_.length()==0){
        initTransformDistribution();
    }
    else{
        setTransformDistribution(transformDistribution_);
    }
}
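
A minimal usage sketch (hypothetical; it assumes the remaining options such as nbTransforms, transformsVariance and withBias keep workable default values): set the learner up as a pure generator in a 3-dimensional input space, all parameters initialized randomly, then sample from it.

    TransformationLearner learner;
    learner.generatorBuild(3);   // random transforms, noise variance, distribution
    Vec start(3);                // some starting data point (here the origin)
    Vec oneSample   = learner.returnPredictedFrom(start);
    Mat manySamples = learner.returnGeneratedSamplesFrom(start, 100);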

OptionList & PLearn::TransformationLearner::getOptionList ( ) const [virtual]

Reimplemented from PLearn::PDistribution.

Definition at line 50 of file TransformationLearner.cc.

OptionMap & PLearn::TransformationLearner::getOptionMap ( ) const [virtual]

Reimplemented from PLearn::PDistribution.

Definition at line 50 of file TransformationLearner.cc.

RemoteMethodMap & PLearn::TransformationLearner::getRemoteMethodMap ( ) const [virtual]

Reimplemented from PLearn::PDistribution.

Definition at line 50 of file TransformationLearner.cc.

real PLearn::TransformationLearner::INIT_weight ( real  initValue) const [inline, private]

CONSTRUCTOR.

arithmetic operations on reconstruction weights: CONSTRUCTOR (proba -> weight)

Definition at line 1568 of file TransformationLearner.cc.

References pl_log.

Referenced by buildLearnedParameters(), expandTargetNeighborPairInReconstructionSet(), initEStepA(), initTransformDistribution(), largeEStepA(), largeEStepB(), log_density(), MStepBias(), MStepTransformDistributionMAP(), randomWeight(), and smallEStep().

{
    return pl_log(initValue);
}
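
Reconstruction weights therefore live in the log domain. A small illustration (values made up; it assumes SUM_weights computes log(p1 + p2), the log-domain addition these log-weights call for):

    real w1 = INIT_weight(0.2);     // log(0.2)
    real w2 = INIT_weight(0.3);     // log(0.3)
    real w  = SUM_weights(w1, w2);  // log(0.2 + 0.3) = log(0.5)
    real p  = PROBA_weight(w);      // back to a probability: 0.5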

void PLearn::TransformationLearner::initEStep ( ) [private]

INITIAL E STEP.

initialization of the reconstruction set

Definition at line 1755 of file TransformationLearner.cc.

References INIT_MODE_DEFAULT, initEStepA(), initEStepB(), and initializationMode.

Referenced by declareMethods(), and train().

void PLearn::TransformationLearner::initEStepA ( ) [private]

initialization of the reconstruction set, version A

Definition at line 1780 of file TransformationLearner.cc.

References expandTargetNeighborPairInReconstructionSet(), findNearestNeighbors(), INIT_weight(), nbNeighbors, nbTransforms, normalizeTargetWeights(), SUM_weights(), and trainingSetLength.

Referenced by initEStep(), and initEStepB().

{
   
    priority_queue< pair< real,int > > pq = priority_queue< pair< real,int > >();
    
    real totalWeight;
    int candidateIdx=0,targetStartIdx, neighborIdx;
    
    //for each point in the training set i.e. for each target point,
    for(int targetIdx = 0; targetIdx < trainingSetLength ;targetIdx++){
        
        //finds the nearest neighbors and keep them in a priority queue 
        findNearestNeighbors(targetIdx, pq);
        
        //expands those neighbors in the reconstruction set
        //(i.e. for each neighbor, creates one entry per transformation and
        //assigns it a positive random weight)
        
        totalWeight = INIT_weight(0);
        targetStartIdx = candidateIdx;
        for(int k = 0; k < nbNeighbors; k++){
            neighborIdx = pq.top().second;
            pq.pop();
            totalWeight =
                SUM_weights(totalWeight,
                            expandTargetNeighborPairInReconstructionSet(targetIdx, 
                                                                        neighborIdx,
                                                                        candidateIdx));
            candidateIdx += nbTransforms;
        }
        //normalizes the  weights of all the entries created for the target 
        //point
        normalizeTargetWeights(targetIdx,totalWeight);
    }

}

void PLearn::TransformationLearner::initEStepB ( ) [private]

initialization of the reconstruction set, version B

initialization of the reconstruction set, version B. We suppose that all the parameters to learn are already initialized to some value.

Definition at line 1770 of file TransformationLearner.cc.

References initEStepA(), and smallEStep().

Referenced by initEStep().

void PLearn::TransformationLearner::initNoiseVariance ( )

initializes the noise variance randomly (gamma distribution)

Definition at line 1122 of file TransformationLearner.cc.

References gamma_sample(), noiseAlpha, noiseBeta, noiseVariance, and PLASSERT.

Referenced by buildLearnedParameters(), declareMethods(), and generatorBuild().

{
    real noisePrecision = gamma_sample(noiseAlpha, noiseBeta);
    PLASSERT(noisePrecision != 0);
    noiseVariance = 1.0/noisePrecision;
}

void PLearn::TransformationLearner::initTransformDistribution ( )

initializes the transformation distribution randomly (dirichlet distribution)

Definition at line 1139 of file TransformationLearner.cc.

References dirichlet_sample(), i, INIT_weight(), nbTransforms, PLearn::TVec< T >::resize(), transformDistribution, and transformDistributionAlpha.

Referenced by buildLearnedParameters(), declareMethods(), and generatorBuild().

void PLearn::TransformationLearner::initTransformsParameters ( )

INITIAL VALUES OF THE PARAMETERS TO LEARN.

initializes the transformation parameters randomly (prior distribution= Normal(0,transformsVariance))

Definition at line 1058 of file TransformationLearner.cc.

References PLearn::addToDiagonal(), biasSet, inputSpaceDim, nbTransforms, PLearn::PLearner::random_gen, PLearn::TMat< T >::resize(), PLearn::TVec< T >::resize(), PLearn::TMat< T >::subMatRows(), TRANSFORM_FAMILY_LINEAR, transformFamily, transforms, transformsSD, transformsSet, and withBias.

Referenced by buildLearnedParameters(), declareMethods(), and generatorBuild().

{
    
    transformsSet .resize(nbTransforms*inputSpaceDim, inputSpaceDim);
    transforms.resize(nbTransforms);
    for(int k = 0; k< nbTransforms; k++){
        transforms[k] = transformsSet.subMatRows(k * inputSpaceDim, inputSpaceDim);       
    }
    for(int t=0; t<nbTransforms ; t++){
        random_gen->fill_random_normal(transforms[t], 0 , transformsSD);
    }
    if(withBias){
        biasSet = Mat(nbTransforms,inputSpaceDim);
        random_gen->fill_random_normal(biasSet, 0,transformsSD);
    }
    else{
        biasSet = Mat(0,0);
    }
    if(transformFamily == TRANSFORM_FAMILY_LINEAR){
        for(int t=0; t<nbTransforms;t++){
            addToDiagonal(transforms[t],1.0);
        }
    }
}

int PLearn::TransformationLearner::inputsize ( ) const [virtual]

Returns the dimension of the input space, i.e. the expected size of the input vectors.

Reimplemented from PLearn::PLearner.

Definition at line 628 of file TransformationLearner.cc.

References inputSpaceDim.

{
    return inputSpaceDim;
}
bool PLearn::TransformationLearner::isWellDefined ( Vec distribution) const [private]

verify if the multinomial distribution given is well-defined, i.e.

verify that the weights represent probabilities, and that those probabilities sum to 1 (the distribution is represented as a set of weights, which are typically log-probabilities).

Definition at line 1734 of file TransformationLearner.cc.

References i, PLearn::TVec< T >::length(), nbTransforms, PROBA_weight(), and PLearn::sum().

Referenced by buildLearnedParameters(), and setTransformDistribution().

{  
    if(nbTransforms != distribution.length()){
        return false;
    }
    real sum = 0;
    real proba;
    for(int i=0; i<nbTransforms;i++){
        proba = PROBA_weight(distribution[i]);
        if(proba < 0 || proba >1){
            return false;
        }
        sum += proba;
    }
    //note: exact floating-point comparison -- the probabilities
    //must sum to 1 exactly
    return sum == 1;
}
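
For example (an in-class illustration; INIT_weight stores log-probabilities):

    Vec td(2);
    td[0] = INIT_weight(0.25);
    td[1] = INIT_weight(0.75);
    // expected to hold when nbTransforms == 2, up to the exact
    // floating-point comparison noted above
    bool ok = isWellDefined(td);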

void PLearn::TransformationLearner::largeEStepA ( ) [private]

LARGE E STEP : VERSION A (expectation step)

full update of the reconstruction set: for each target, keeps the k*m most probable (neighbor, transformation) pairs (k = nb neighbors, m = nb transformations)

Definition at line 1903 of file TransformationLearner.cc.

References findBestTargetReconstructionCandidates(), INIT_weight(), nbTargetReconstructions, normalizeTargetWeights(), reconstructionSet, SUM_weights(), PLearn::TVec< T >::top(), and trainingSetLength.

Referenced by declareMethods(), and EStep().

{
    priority_queue< ReconstructionCandidate > pq =  
        priority_queue< ReconstructionCandidate >();
    real totalWeight= INIT_weight(0);
    int candidateIdx=0;
    
    //for each point in the training set i.e. for each target point,
    for(int targetIdx = 0; targetIdx < trainingSetLength ; targetIdx++){
        
        //finds the best weighted triples and keep them in a priority queue 
        findBestTargetReconstructionCandidates(targetIdx, pq);
        //store those triples in the dataset:
        totalWeight = INIT_weight(0);
        for(int k=0; k < nbTargetReconstructions; k++){
            reconstructionSet[candidateIdx] = pq.top(); 
            totalWeight = SUM_weights(pq.top().weight, totalWeight);
            pq.pop();         
            candidateIdx ++;
        }
        
        //normalizes the  weights of all the entries created for the 
        //target point;
        normalizeTargetWeights(targetIdx,totalWeight);
    } 
}

void PLearn::TransformationLearner::largeEStepB ( ) [private]

LARGE E STEP : VERSION B (expectation step)

full update of the reconstruction set: for each pair (target, transformation), finds the best weighted neighbors

Definition at line 1990 of file TransformationLearner.cc.

References findBestWeightedNeighbors(), INIT_weight(), nbNeighbors, nbTransforms, normalizeTargetWeights(), reconstructionSet, SUM_weights(), PLearn::TVec< T >::top(), and trainingSetLength.

Referenced by declareMethods(), and EStep().

{
    priority_queue< ReconstructionCandidate > pq;
    
    real totalWeight , weight;
    int candidateIdx=0 ;
    
    //for each point in the training set i.e. for each target point,
    for(int targetIdx =0; targetIdx<trainingSetLength ;targetIdx++){
        
        totalWeight = INIT_weight(0);
        for(int transformIdx=0; transformIdx < nbTransforms; transformIdx ++){
            //finds the best weighted triples and keeps them in a priority queue
            findBestWeightedNeighbors(targetIdx,transformIdx, pq);
            //store those neighbors in the dataset
            for(int k=0; k<nbNeighbors; k++){
                reconstructionSet[candidateIdx] = pq.top();
                weight = pq.top().weight;
                totalWeight = SUM_weights( weight, totalWeight);
                pq.pop();
                candidateIdx ++;
            }
        }
      //normalizes the  weights of all the entries created for the target 
      //point;
        normalizeTargetWeights(targetIdx,totalWeight);
    }
}

real PLearn::TransformationLearner::log_density ( const Vec y) const [virtual]

Return log of probability density log(p(y | x)).

Reimplemented from PLearn::PDistribution.

Definition at line 638 of file TransformationLearner.cc.

References computeReconstructionWeight(), INIT_weight(), inputSpaceDim, PLearn::TVec< T >::length(), MULT_weights(), nbTransforms, noiseVariance, Pi, pl_log, PLASSERT, PLearn::pow(), seeTrainingPoint(), ses_neighbor, ses_predictedTarget, SUM_weights(), trainingSetLength, and transformDistribution.

{
    PLASSERT(y.length() == inputSpaceDim);
    real weight;
    real totalWeight = INIT_weight(0);
    real scalingFactor = -1*(pl_log(pow(2*Pi*noiseVariance, inputSpaceDim/2.0)) 
                             +
                             pl_log(trainingSetLength));
    for(int neighborIdx=0; neighborIdx<trainingSetLength; neighborIdx++){
        seeTrainingPoint(neighborIdx,ses_neighbor);
        for(int transformIdx=0 ; transformIdx<nbTransforms ; transformIdx++){
            weight = computeReconstructionWeight(y,
                                                 ses_neighbor,
                                                 transformIdx,
                                                 ses_predictedTarget);
            weight = MULT_weights(weight,
                                  transformDistribution[transformIdx]);
            totalWeight = SUM_weights(weight,totalWeight);
        }  
    }
    totalWeight = MULT_weights(totalWeight, scalingFactor);
    return totalWeight;
}
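
In formula form (a reading of the code, with N = trainingSetLength, m = nbTransforms, d = inputSpaceDim, sigma^2 = noiseVariance, and w(y, nu, t) the value returned by computeReconstructionWeight, presumably an unnormalized Gaussian log-likelihood):

    \log p(y) = \log \sum_{\nu=1}^{N} \sum_{t=1}^{m} p(t)\, e^{w(y,\nu,t)}
                - \frac{d}{2} \log(2\pi\sigma^2) - \log N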

void PLearn::TransformationLearner::mainLearnerBuild ( )

main initialization operations that have to be done before any training phase

initialization operations that have to be done before the training. WARNING: the train set ("train_set") must already be provided.

Definition at line 812 of file TransformationLearner.cc.

References B, B_C, biasOffset, biasPeriod, C, diversityFactor, emphasisOnDiversity, fbtrc_neighbor, fbtrc_predictedTarget, fbtrc_target, fbwn_neighbor, fbwn_predictedTarget, fbwn_target, fnn_neighbor, fnn_target, inputSpaceDim, learnNoiseVariance, learnTransformDistribution, PLearn::VMat::length(), lg_neighbor, lg_predictedTarget, msb_neighbor, msb_newBiasSet, msb_norms, msb_reconstruction, msb_target, msnvMAP_neighbor, msnvMAP_reconstruction, msnvMAP_target, msnvMAP_total_k, mst_neighbor, mst_pivots, mst_target, mst_v, mstd_B, mstd_C, mstd_D, mstd_neighbor, mstd_pivots, mstd_target, mstd_v, nbNeighbors, nbReconstructions, nbTargetReconstructions, nbTransforms, newDistribution, NOISE_ALPHA_NO_REG, NOISE_BETA_NO_REG, noiseAlpha, noiseBeta, noiseVarianceOffset, noiseVariancePeriod, PLASSERT, regOnNoiseVariance, regOnTransformDistribution, PLearn::TMat< T >::resize(), PLearn::TVec< T >::resize(), ses_neighbor, ses_predictedTarget, ses_target, PLearn::sqrt(), PLearn::TMat< T >::subMatRows(), PLearn::PLearner::train_set, trainingSetLength, TRANSFORM_DISTRIBUTION_ALPHA_NO_REG, transformDistributionAlpha, transformDistributionOffset, transformDistributionPeriod, transformsOffset, transformsPeriod, transformsSD, transformsVariance, UNDEFINED, and withBias.

Referenced by build_().

{

    //dimension of the input space
    inputSpaceDim = train_set->inputsize();

    //some storage variables that we will re-use to save time
    newDistribution.resize(nbTransforms) ;
    ses_target.resize(inputSpaceDim);
    ses_neighbor.resize(inputSpaceDim);
    ses_predictedTarget.resize(inputSpaceDim);
    lg_neighbor.resize(inputSpaceDim);
    lg_predictedTarget.resize(inputSpaceDim);
    fnn_target.resize(inputSpaceDim);
    fnn_neighbor.resize(inputSpaceDim);
    fbtrc_neighbor.resize(inputSpaceDim);
    fbtrc_target.resize(inputSpaceDim);
    fbtrc_predictedTarget.resize(inputSpaceDim);
    fbwn_target.resize(inputSpaceDim);
    fbwn_neighbor.resize(inputSpaceDim);
    fbwn_predictedTarget.resize(inputSpaceDim);
    mst_v.resize(inputSpaceDim);
    mst_target.resize(inputSpaceDim);
    mst_neighbor.resize(inputSpaceDim);
    mst_pivots.resize(inputSpaceDim);
    msb_newBiasSet.resize(nbTransforms,inputSpaceDim);
    msb_norms.resize(nbTransforms);
    msb_target.resize(inputSpaceDim);
    msb_neighbor.resize(inputSpaceDim);
    msb_reconstruction.resize(inputSpaceDim);
    msnvMAP_total_k.resize(inputSpaceDim);
    msnvMAP_target.resize(inputSpaceDim);
    msnvMAP_neighbor.resize(inputSpaceDim);
    msnvMAP_reconstruction.resize(inputSpaceDim);
    mstd_B.resize(inputSpaceDim,inputSpaceDim);
    mstd_C.resize(inputSpaceDim,inputSpaceDim);
    mstd_D.resize(inputSpaceDim,inputSpaceDim);
    mstd_v.resize(inputSpaceDim);
    mstd_target.resize(inputSpaceDim);
    mstd_neighbor.resize(inputSpaceDim);
    mstd_pivots.resize(inputSpaceDim);
    
    //put more emphasis on diversity among transformation?
    if(emphasisOnDiversity){
        PLASSERT(!withBias);
        if(diversityFactor<=0){
            diversityFactor = 1.0/transformsVariance;  
        }
    }
    else{
        diversityFactor = 0;
    }


    int defaultPeriod = 1;
    int defaultTransformsOffset = 0;
    int defaultBiasOffset = 0;
    int defaultNoiseVarianceOffset = 0;
    int defaultTransformDistributionOffset = 0;
    
    if(withBias){
        defaultBiasOffset = defaultPeriod ;
        defaultPeriod++;
    }
    if(learnNoiseVariance){
        defaultNoiseVarianceOffset = defaultPeriod;
        defaultPeriod++;
    }
    if(learnTransformDistribution){
        defaultTransformDistributionOffset = defaultPeriod;
        defaultPeriod ++;
    }
    
    
    transformsSD = sqrt(transformsVariance);
    
    //DIMENSION VARIABLES
          
    //number of samples given in the training set
    trainingSetLength = train_set->length();
    
    
    //number of reconstruction candidates related to a specific target in the 
    //reconstruction set.   
    nbTargetReconstructions = nbNeighbors * nbTransforms;

    //total number of reconstruction candidates in the reconstruction set
    nbReconstructions = trainingSetLength * nbTargetReconstructions;
    if(withBias){
        if(biasPeriod == UNDEFINED || biasOffset == UNDEFINED){
            biasPeriod = defaultPeriod;
            biasOffset = defaultBiasOffset;
        }
    }

    else{
        biasPeriod = UNDEFINED ;
        biasOffset = UNDEFINED;
    }

    if(transformsPeriod == UNDEFINED || transformsOffset == UNDEFINED){
        transformsPeriod = defaultPeriod;
        transformsOffset = defaultTransformsOffset;
    }

    //training parameters for noise variance
    if(learnNoiseVariance){
        if(noiseVariancePeriod == UNDEFINED || noiseVarianceOffset == UNDEFINED){
            noiseVariancePeriod = defaultPeriod;
            noiseVarianceOffset = defaultNoiseVarianceOffset;
        }
        if(regOnNoiseVariance){
            if(noiseAlpha < 1)
                noiseAlpha = 1;
            if(noiseBeta <= 0){
                noiseBeta = 1;
            }
        }
        else{
            noiseAlpha = NOISE_ALPHA_NO_REG;
            noiseBeta = NOISE_BETA_NO_REG;
        }
    }
    else{
        noiseVariancePeriod = UNDEFINED;
        noiseVarianceOffset = UNDEFINED;
    }

     //training parameters for transformation distribution
     if(learnTransformDistribution){
         if(transformDistributionPeriod == UNDEFINED || transformDistributionOffset == UNDEFINED){
             transformDistributionPeriod = defaultPeriod;
             transformDistributionOffset = defaultTransformDistributionOffset;
         }
         //mirror the noise-variance handling above: the no-regularization
         //constant applies when regularization is turned off
         if(regOnTransformDistribution){
             if(transformDistributionAlpha <= 0){
                 transformDistributionAlpha = 10;
             }
         }
         else{
             transformDistributionAlpha = TRANSFORM_DISTRIBUTION_ALPHA_NO_REG;
         }
     }
     else{
         transformDistributionPeriod = UNDEFINED;
         transformDistributionOffset = UNDEFINED;
     }

    //OTHER VARIABLES

    //Storage space used in the update of the transformation parameters
    B_C = Mat(2 * nbTransforms * inputSpaceDim , inputSpaceDim);
    
    B.resize(nbTransforms);
    C.resize(nbTransforms);
    for(int k=0; k<nbTransforms; k++){
        B[k]= B_C.subMatRows(k*inputSpaceDim, inputSpaceDim);
    }
    for(int k= nbTransforms ; k<2*nbTransforms ; k++){
        C[(k % nbTransforms)] = B_C.subMatRows(k*inputSpaceDim, inputSpaceDim);
    }
}

void PLearn::TransformationLearner::makeDeepCopyFromShallowCopy ( CopiesMap copies) [virtual]

Transforms a shallow copy into a deep copy.

Reimplemented from PLearn::PDistribution.

Definition at line 667 of file TransformationLearner.cc.

References PLearn::PDistribution::makeDeepCopyFromShallowCopy().

{
    inherited::makeDeepCopyFromShallowCopy(copies);

    // ### Call deepCopyField on all "pointer-like" fields
    // ### that you wish to be deepCopied rather than
    // ### shallow-copied.
    // ### ex:
    // deepCopyField(trainvec, copies);
    

    // ### Remove this line when you have fully implemented this method.
    //PLERROR("TransformationLearner::makeDeepCopyFromShallowCopy not fully (correctly) implemented yet!");
}

void PLearn::TransformationLearner::MStep ( ) [private]

M STEP.

coordination of the different kinds of maximization steps (i.e.: we optimize with respect to which parameter?)

Definition at line 2108 of file TransformationLearner.cc.

References biasOffset, biasPeriod, emphasisOnDiversity, MStepBias(), MStepNoiseVariance(), MStepTransformationDiv(), MStepTransformations(), MStepTransformDistribution(), nbTransforms, noiseVarianceOffset, noiseVariancePeriod, PLearn::PLearner::stage, transformDistributionOffset, transformDistributionPeriod, transformsOffset, and transformsPeriod.

Referenced by declareMethods(), and train().

void PLearn::TransformationLearner::MStepBias ( ) [private]

maximization step with respect to transformation bias (MAP version)

Definition at line 2322 of file TransformationLearner.cc.

References applyTransformationOn(), biasSet, PLearn::TVec< T >::fill(), PLearn::TMat< T >::fill(), INIT_weight(), msb_neighbor, msb_newBiasSet, msb_norms, msb_reconstruction, msb_target, nbReconstructions, nbTransforms, noiseVariance, PROBA_weight(), reconstructionSet, seeTrainingPoint(), SUM_weights(), and transformsVariance.

Referenced by declareMethods(), and MStep().

{
    msb_newBiasSet.fill(0);
    msb_norms.fill(INIT_weight(0));
    int transformIdx;
    real proba,weight;
    for(int idx=0; idx<nbReconstructions; idx++){
        transformIdx = reconstructionSet[idx].transformIdx;
        weight = reconstructionSet[idx].weight;
        proba = PROBA_weight(weight);
        seeTrainingPoint(reconstructionSet[idx].targetIdx,msb_target);
        seeTrainingPoint(reconstructionSet[idx].neighborIdx, msb_neighbor);
        applyTransformationOn(transformIdx,msb_neighbor, msb_reconstruction);
        msb_newBiasSet(transformIdx) += proba*(msb_target - msb_reconstruction);
        msb_norms[transformIdx] = SUM_weights(msb_norms[transformIdx],weight);
    }
    for(int t=0; t<nbTransforms ; t++){
        msb_newBiasSet(t) /= ((noiseVariance/transformsVariance) 
                              +
                              PROBA_weight(msb_norms[t]));
    }
    biasSet << msb_newBiasSet;   
}
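
Written out, the update above sets, for every transformation t (p_c is the posterior weight of candidate c, y_c the target and \hat{y}_c its reconstruction by applyTransformationOn, the sums running over the candidates that use t):

    b_t = \frac{\sum_{c:\,t_c=t} p_c (y_c - \hat{y}_c)}
               {\sigma^2/\sigma_T^2 + \sum_{c:\,t_c=t} p_c}

with \sigma^2 = noiseVariance and \sigma_T^2 = transformsVariance.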

void PLearn::TransformationLearner::MStepNoiseVariance ( ) [private]

maximization step with respect to noise variance

Definition at line 2347 of file TransformationLearner.cc.

References MStepNoiseVarianceMAP(), noiseAlpha, and noiseBeta.

Referenced by declareMethods(), and MStep().

void PLearn::TransformationLearner::MStepNoiseVarianceMAP ( real  alpha,
real  beta 
) [private]

maximization step with respect to the noise variance (MAP version; alpha and beta = gamma prior distribution parameters). NOTE: alpha = 1, beta = 0 -> no regularization

Definition at line 2355 of file TransformationLearner.cc.

References PLearn::TVec< T >::fill(), inputSpaceDim, msnvMAP_neighbor, msnvMAP_reconstruction, msnvMAP_target, msnvMAP_total_k, nbTargetReconstructions, noiseVariance, PROBA_weight(), reconstructionEuclideanDistance(), reconstructionSet, seeTrainingPoint(), PLearn::sum(), and trainingSetLength.

Referenced by MStepNoiseVariance().

{
    
    msnvMAP_total_k.fill(0);
    int transformIdx;
    real proba;
    int candidateIdx=0;
    for(int targetIdx=0; targetIdx<trainingSetLength; targetIdx ++){
        seeTrainingPoint(targetIdx,msnvMAP_target);
        for(int idx=0; idx < nbTargetReconstructions; idx++){
            transformIdx = reconstructionSet[candidateIdx].transformIdx;
            seeTrainingPoint(reconstructionSet[candidateIdx].neighborIdx , msnvMAP_neighbor);
            proba = PROBA_weight(reconstructionSet[candidateIdx].weight);
            msnvMAP_total_k[transformIdx]+=(proba * reconstructionEuclideanDistance(msnvMAP_target,
                                                                                    msnvMAP_neighbor,
                                                                                    transformIdx,
                                                                                    msnvMAP_reconstruction));
            candidateIdx ++;
        }
    }
    noiseVariance = (2*beta + sum(msnvMAP_total_k))/(2*alpha - 2 + trainingSetLength*inputSpaceDim);  
        
}
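
Written out, the closed-form MAP update computed above is:

    \sigma^2 = \frac{2\beta + \sum_c p_c \lVert y_c - \hat{y}_c \rVert^2}
                    {2\alpha - 2 + N d}

where the sum runs over all reconstruction candidates c, p_c is the candidate's posterior weight, \hat{y}_c its reconstruction of the target, N = trainingSetLength and d = inputSpaceDim.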

void PLearn::TransformationLearner::MStepTransformationDiv ( int  transformIdx) [private]

maximization step with respect to a specific transformation matrix.

It is a MAP version, but this time the prior probability of the matrix is different: we put more probability on a matrix that diverges from the other transformation matrices.

  • for the moment, we assume there is no bias associated to the transformations

Definition at line 2259 of file TransformationLearner.cc.

References PLearn::addToDiagonal(), PLearn::TMat< T >::clear(), diversityFactor, PLearn::externalProductScaleAcc(), PLearn::lapackSolveLinearSystem(), mstd_B, mstd_C, mstd_D, mstd_neighbor, mstd_pivots, mstd_target, mstd_v, nbReconstructions, nbTransforms, noiseVariance, PROBA_weight(), reconstructionSet, seeTrainingPoint(), TRANSFORM_FAMILY_LINEAR_INCREMENT, transformFamily, transforms, and transformsVariance.

Referenced by declareMethods(), and MStep().

{
    //set the m dXd matrices Ck and Bk , k in{1, ...,m} to 0.
    mstd_B.clear();
    mstd_C.clear();
    mstd_D.clear();
    
    for(int t=0; t<nbTransforms ; t++){
        if(t != transformIdx){
            mstd_D += transforms[t];
        }
    }
    mstd_D *= -2*diversityFactor*noiseVariance;
   

    //real lambda = noiseVariance*(1.0/transformsVariance -2*(nbTransforms - 1)*diversityFactor);
    real lambda = noiseVariance/transformsVariance ;
    
    for(int idx=0 ; idx<nbReconstructions ; idx++){
        
        //catch a view on the next entry of our dataset, that is, a  triple:
        //(target_idx, neighbor_idx, transformation_idx)
        
        real p = PROBA_weight(reconstructionSet[idx].weight);
  
        //catch the target and neighbor points from the training set
        
        seeTrainingPoint(reconstructionSet[idx].targetIdx, mstd_target);
        seeTrainingPoint(reconstructionSet[idx].neighborIdx, mstd_neighbor);
        
        if( reconstructionSet[idx].transformIdx == transformIdx){
            mstd_v << mstd_target;
            if(transformFamily == TRANSFORM_FAMILY_LINEAR_INCREMENT){
                mstd_v = mstd_v - mstd_neighbor;
            }
     
            //at the end, we want mstd_C to represent the matrix
            //( X^T W X + lambda*I ) transposed, where X stacks the
            //neighbors and W holds the candidate probabilities
            externalProductScaleAcc(mstd_C, mstd_neighbor, mstd_neighbor, p);

            //at the end, we want mstd_B to represent the matrix
            //( X^T W V ) transposed, where V stacks the shifted targets
            externalProductScaleAcc(mstd_B, mstd_v, mstd_neighbor, p);
        }
        
    }

    addToDiagonal(mstd_C, lambda);
    mstd_B += mstd_D;
    lapackSolveLinearSystem(mstd_C, mstd_B, mstd_pivots);
    transforms[transformIdx] << mstd_B;
    
}

void PLearn::TransformationLearner::MStepTransformations ( ) [private]

maximization step with respect to the transformation matrices (MAP version)

Definition at line 2171 of file TransformationLearner.cc.

References PLearn::addToDiagonal(), B, B_C, biasSet, C, PLearn::TMat< T >::clear(), PLearn::externalProductScaleAcc(), PLearn::lapackSolveLinearSystem(), mst_neighbor, mst_pivots, mst_target, mst_v, nbReconstructions, nbTransforms, noiseVariance, PROBA_weight(), reconstructionSet, seeTrainingPoint(), TRANSFORM_FAMILY_LINEAR_INCREMENT, transformFamily, transforms, transformsVariance, and withBias.

Referenced by declareMethods(), and MStep().

{
    
    //set the m dXd matrices Ck and Bk , k in{1, ...,m} to 0.
    B_C.clear();
    
    real lambda = 1.0*noiseVariance/transformsVariance;
    for(int idx=0 ; idx<nbReconstructions ; idx++){
        
        //catch a view on the next entry of our dataset, that is, a  triple:
        //(target_idx, neighbor_idx, transformation_idx)
        
        real p = PROBA_weight(reconstructionSet[idx].weight);
  
        //catch the target and neighbor points from the training set
        
        seeTrainingPoint(reconstructionSet[idx].targetIdx, mst_target);
        seeTrainingPoint(reconstructionSet[idx].neighborIdx, mst_neighbor);
        
        int t = reconstructionSet[idx].transformIdx;
        
        mst_v << mst_target;
        if(transformFamily == TRANSFORM_FAMILY_LINEAR_INCREMENT){
            mst_v = mst_v - mst_neighbor;
        }
        if(withBias){
            mst_v = mst_v - biasSet(t);
        }
        //at the end, we want C[t] to represent the matrix
        //( X^T W X + lambda*I ) transposed, where X stacks the
        //neighbors and W holds the candidate probabilities
        externalProductScaleAcc(C[t], mst_neighbor, mst_neighbor, p);

        //at the end, we want B[t] to represent the matrix
        //( X^T W V ) transposed, where V stacks the shifted targets
        externalProductScaleAcc(B[t], mst_v, mst_neighbor, p);
    }
    
 
    for(int t=0; t<nbTransforms; t++){
        addToDiagonal(C[t], lambda);
        lapackSolveLinearSystem(C[t], B[t], mst_pivots);
        transforms[t] << B[t];
    }
}
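
Each transformation is thus the solution of a weighted ridge regression. Reading the code, the accumulated matrices solve, for every t:

    T_t = \Big( \sum_{c:\,t_c=t} p_c\, v_c x_c^{\top} \Big)
          \Big( \sum_{c:\,t_c=t} p_c\, x_c x_c^{\top} + \lambda I \Big)^{-1},
    \qquad \lambda = \sigma^2 / \sigma_T^2

where x_c is the neighbor, and v_c the target (shifted by the neighbor for the linear-increment family, and by the bias when withBias is set).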

void PLearn::TransformationLearner::MStepTransformDistribution ( ) [private]

maximization step with respect to transformation distribution parameters

Definition at line 2130 of file TransformationLearner.cc.

References MStepTransformDistributionMAP(), and transformDistributionAlpha.

Referenced by declareMethods(), and MStep().

void PLearn::TransformationLearner::MStepTransformDistributionMAP ( real  alpha) [private]

maximization step with respect to the transformation distribution parameters (MAP version; alpha = dirichlet prior distribution parameter). NOTE: alpha = 1 -> no regularization

Definition at line 2139 of file TransformationLearner.cc.

References DIV_weights(), PLearn::TVec< T >::fill(), INIT_weight(), nbReconstructions, nbTransforms, newDistribution, reconstructionSet, SUM_weights(), trainingSetLength, and transformDistribution.

Referenced by MStepTransformDistribution().

{
    newDistribution.fill(INIT_weight(0));
        
    int transformIdx;
    real weight;
    for(int idx =0 ;idx < nbReconstructions ; idx ++){
        transformIdx = reconstructionSet[idx].transformIdx;
        weight = reconstructionSet[idx].weight;
        newDistribution[transformIdx] = 
            SUM_weights(newDistribution[transformIdx],
                        weight);
    }

    real addFactor = INIT_weight(alpha - 1);
    real divisionFactor = INIT_weight(nbTransforms*(alpha - 1) + trainingSetLength); 

    for(int k=0; k<nbTransforms ; k++){
        newDistribution[k]= DIV_weights(SUM_weights(addFactor,
                                                    newDistribution[k]),
                                        divisionFactor);
    }
    transformDistribution << newDistribution ;
}
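
Written out, with m = nbTransforms and N = trainingSetLength, the update computes (entirely in the log domain):

    p(t) = \frac{(\alpha - 1) + \sum_{c:\,t_c=t} p_c}{m(\alpha - 1) + N}

so alpha = 1 recovers the plain maximum-likelihood update.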

real PLearn::TransformationLearner::MULT_INVERSE_weight ( real  weight) const [inline, private]

MULTIPLICATIVE INVERSE.

arithmetic operations on reconstruction weights: MULTIPLICATIVE INVERSE. weight = log(p); we want weight' = log(1/p) = log(1) - log(p) = 0 - log(p) = -weight

Definition at line 1598 of file TransformationLearner.cc.

{
    
    return -1*weight;
}
real PLearn::TransformationLearner::MULT_weights ( real  weight1,
real  weight2 
) const [inline, private]

MULTIPLICATION.

arithmetic operations on reconstruction weights: MULTIPLICATION. weight1 = log(p1), weight2 = log(p2); we want weight3 = log(p1*p2) = log(p1) + log(p2) = weight1 + weight2

Definition at line 1609 of file TransformationLearner.cc.

Referenced by computeReconstructionWeight(), and log_density().

{
    
    return weight1 + weight2 ;
}

void PLearn::TransformationLearner::nextStage ( ) [private]

increments the variable 'stage' by 1

Definition at line 2409 of file TransformationLearner.cc.

References PLearn::PLearner::stage.

Referenced by declareMethods().

{
    stage ++;
}

void PLearn::TransformationLearner::normalizeTargetWeights ( int  targetIdx,
real  totalWeight 
) [inline, private]

OPERATIONS ON WEIGHTS.

normalizes the reconstruction weights related to a given target.

Definition at line 1547 of file TransformationLearner.cc.

References DIV_weights(), nbTargetReconstructions, reconstructionSet, and w.

Referenced by initEStepA(), largeEStepA(), largeEStepB(), and smallEStep().

{
    real w;
    int startIdx = targetIdx * nbTargetReconstructions;
    int endIdx = startIdx + nbTargetReconstructions;
    for(int candidateIdx =startIdx; candidateIdx<endIdx; candidateIdx++){
        w = reconstructionSet[candidateIdx].weight;
        reconstructionSet[candidateIdx].weight =  DIV_weights(w,totalWeight);
    }
}
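
In the log domain this normalization is a simple subtraction. A small illustration (it assumes DIV_weights(w1, w2) computes w1 - w2, the log of the quotient, mirroring MULT_weights):

    real w     = INIT_weight(0.2);       // log(0.2)
    real total = INIT_weight(0.5);       // log(0.5)
    real wNorm = DIV_weights(w, total);  // log(0.2/0.5) = log(0.4)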

int PLearn::TransformationLearner::pickNeighborIdx ( ) const

Select a neighbor in the training set randomly (returns its index in the training set). We suppose all data points in the training set are equiprobable.

Definition at line 1268 of file TransformationLearner.cc.

References PLearn::PLearner::random_gen, and trainingSetLength.

Referenced by declareMethods(), and generate().

{
    
    return random_gen->uniform_multinomial_sample(trainingSetLength);
}

int PLearn::TransformationLearner::pickTransformIdx ( ) const

select a transformation randomly (with respect to our multinomial distribution)

Definition at line 1253 of file TransformationLearner.cc.

References i, nbTransforms, PROBA_weight(), PLearn::PLearner::random_gen, PLearn::TVec< T >::resize(), transformDistribution, and w.

Referenced by declareMethods(), and generatePredictedFrom().

{
    
    Vec probaTransformDistribution ;
    probaTransformDistribution.resize(nbTransforms);
    for(int i=0; i<nbTransforms; i++){
        probaTransformDistribution[i]=PROBA_weight(transformDistribution[i]);
    }
    int w= random_gen->multinomial_sample(probaTransformDistribution);
    return w;
}

real PLearn::TransformationLearner::PROBA_weight ( real  weight) const [inline, private]

GET CORRESPONDING PROBABILITY.

arithmetic operations on reconstruction weights: GET CORRESPONDING PROBABILITY (weight -> proba)

Definition at line 1575 of file TransformationLearner.cc.

References PLearn::exp().

Referenced by isWellDefined(), MStepBias(), MStepNoiseVarianceMAP(), MStepTransformationDiv(), MStepTransformations(), and pickTransformIdx().

{
    return exp(weight); 
}

real PLearn::TransformationLearner::randomWeight ( ) const [inline, private]

returns a random weight

Definition at line 1560 of file TransformationLearner.cc.

References INIT_weight(), minimumProba, PLearn::PLearner::random_gen, and w.

Referenced by expandTargetNeighborPairInReconstructionSet().

{  
    real w = random_gen->uniform_sample();
    return INIT_weight((w + minimumProba)/(1.0 + minimumProba));
}

real PLearn::TransformationLearner::reconstructionEuclideanDistance ( const Vec target,
const Vec neighbor,
int  transformIdx,
Vec reconstruction 
) const [inline, private]

Definition at line 2394 of file TransformationLearner.cc.

References applyTransformationOn(), and PLearn::powdistance().

{
    applyTransformationOn(transformIdx,
                          neighbor,
                          reconstruction);
    return powdistance(target,reconstruction);

}

real PLearn::TransformationLearner::reconstructionEuclideanDistance ( int  candidateIdx) [inline, private]

returns the distance between the reconstruction and the target for the 'candidateIdx'th reconstruction candidate

Definition at line 2381 of file TransformationLearner.cc.

References applyTransformationOn(), inputSpaceDim, PLearn::powdistance(), reconstructionSet, and seeTrainingPoint().

Referenced by MStepNoiseVarianceMAP().

{
    Vec target(inputSpaceDim);
    seeTrainingPoint(reconstructionSet[candidateIdx].targetIdx, target);
    Vec neighbor(inputSpaceDim);
    seeTrainingPoint(reconstructionSet[candidateIdx].neighborIdx,
                     neighbor);
    Vec reconstruction(inputSpaceDim);
    applyTransformationOn(reconstructionSet[candidateIdx].transformIdx,
                          neighbor,
                          reconstruction);
    return powdistance(target, reconstruction);
}

Vec PLearn::TransformationLearner::return_dirichlet_sample ( real  alpha) const [private]

Definition at line 1519 of file TransformationLearner.cc.

References dirichlet_sample(), inputSpaceDim, PLearn::TVec< T >::resize(), and PLearn::sample().

Referenced by declareMethods().

{
    Vec sample;
    //note: the sample is sized to inputSpaceDim here; a sample meant
    //for the transformation distribution would need nbTransforms
    //components
    sample.resize(inputSpaceDim);
    dirichlet_sample(alpha, sample);
    return sample;
}

Mat PLearn::TransformationLearner::returnAllTransforms ( ) const

returns the parameters of each transformation (as a Kd x d matrix, K = number of transformations, d = dimension of the input space)

Definition at line 1445 of file TransformationLearner.cc.

References PLearn::TMat< T >::copy(), and transformsSet.

Referenced by declareMethods().

{
    return transformsSet.copy();    
}

Mat PLearn::TransformationLearner::returnGeneratedSamplesFrom ( Vec  center,
int  n,
int  transformIdx = -1 
) const

Definition at line 1240 of file TransformationLearner.cc.

References batchGeneratePredictedFrom(), and inputSpaceDim.

Referenced by declareMethods().

{
    Mat samples = Mat(n,inputSpaceDim);
    if(transformIdx<0)
        batchGeneratePredictedFrom(center,samples);
    else
        batchGeneratePredictedFrom(center,samples,transformIdx);
    return samples;
}

Mat PLearn::TransformationLearner::returnNeighbors ( int  targetIdx) const

returns the neighbors chosen to reconstruct the target (one chosen neighbor for each reconstruction candidate associated to the target)

Definition at line 1419 of file TransformationLearner.cc.

References i, inputSpaceDim, nbTargetReconstructions, reconstructionSet, PLearn::TVec< T >::resize(), and seeTrainingPoint().

Referenced by declareMethods().

{
    int candidateIdx = targetIdx*nbTargetReconstructions;
    int neighborIdx;
    Mat neighbors = Mat(nbTargetReconstructions, inputSpaceDim);
    for(int i=0; i<nbTargetReconstructions; i++){
        neighborIdx = reconstructionSet[candidateIdx].neighborIdx;
        Vec neighbor;
        neighbor.resize(inputSpaceDim);
        seeTrainingPoint(neighborIdx, neighbor);
        neighbors(i) << neighbor;
        candidateIdx++;
    }
    return neighbors;
}

Vec PLearn::TransformationLearner::returnPredictedFrom ( Vec  source,
int  transformIdx = -1 
) const

generates a sample data point from a source data point and returns it (if transformIdx >= 0 , we use the corresponding transformation )

Definition at line 1193 of file TransformationLearner.cc.

References generatePredictedFrom(), inputSpaceDim, PLearn::TVec< T >::resize(), and PLearn::sample().

Referenced by declareMethods().

{
    Vec sample;
    sample.resize(inputSpaceDim);
    if(transformIdx <0)
        generatePredictedFrom(source,sample);
    else
        generatePredictedFrom(source,sample,transformIdx);
    return sample;
}

TVec< ReconstructionCandidate > PLearn::TransformationLearner::returnReconstructionCandidates ( int  targetIdx) const

returns all the reconstructions candidates associated to a given target

Definition at line 1388 of file TransformationLearner.cc.

References PLearn::TVec< T >::copy(), nbTargetReconstructions, reconstructionSet, and PLearn::TVec< T >::subVec().

Referenced by declareMethods().

{
   
    int startIdx = targetIdx * nbTargetReconstructions;  
    return reconstructionSet.subVec(startIdx, 
                                    nbTargetReconstructions).copy();
}

Mat PLearn::TransformationLearner::returnReconstructions ( int  targetIdx) const

returns the reconstructions of the "targetIdx"th data point in the training set (one reconstruction for each reconstruction candidate)

Definition at line 1399 of file TransformationLearner.cc.

References applyTransformationOn(), i, inputSpaceDim, nbTargetReconstructions, reconstructionSet, PLearn::TVec< T >::resize(), and seeTrainingPoint().

Referenced by declareMethods().

{
    Mat reconstructions = Mat(nbTargetReconstructions,inputSpaceDim);
    int candidateIdx = targetIdx*nbTargetReconstructions;
    int neighborIdx, transformIdx;
    for(int i=0; i<nbTargetReconstructions; i++){
        neighborIdx = reconstructionSet[candidateIdx].neighborIdx;
        transformIdx= reconstructionSet[candidateIdx].transformIdx;
        Vec neighbor;
        neighbor.resize(inputSpaceDim);
        seeTrainingPoint(neighborIdx, neighbor);
        Vec v = reconstructions(i);
        applyTransformationOn(transformIdx, neighbor, v);
        candidateIdx ++;
    }
    return reconstructions; 
}

Mat PLearn::TransformationLearner::returnSequenceDataSet ( Vec  start,
int  n,
int  transformIdx = -1 
) const

Definition at line 1359 of file TransformationLearner.cc.

References sequenceDataSet().

Referenced by declareMethods().

{
    Mat dataPoints;
    sequenceDataSet(start,n,dataPoints,transformIdx);
    return dataPoints;
}

Vec PLearn::TransformationLearner::returnTrainingPoint ( int  idx) const

COPIES OF THE STRUCTURES.

returns the "idx"th data point in the training set

Definition at line 1375 of file TransformationLearner.cc.

References PLearn::VMat::getExample(), inputSpaceDim, PLearn::TVec< T >::resize(), PLearn::PLearner::train_set, and w.

Referenced by declareMethods().

{
    
    Vec v,temp;
    real w;
    v.resize(inputSpaceDim);
    train_set->getExample(idx, v, temp, w);
    return v;
    
}

Mat PLearn::TransformationLearner::returnTransform ( int  transformIdx) const

returns the parameters of the "transformIdx"th transformation

Definition at line 1437 of file TransformationLearner.cc.

References PLearn::TVec< T >::copy(), and transforms.

Referenced by declareMethods().

{
    return transforms[transformIdx].copy();    
}

Mat PLearn::TransformationLearner::returnTreeDataSet ( Vec  root,
int  deepness,
int  branchingFactor,
int  transformIdx = -1 
) const

Definition at line 1337 of file TransformationLearner.cc.

References treeDataSet().

Referenced by declareMethods().

{
    Mat dataPoints;
    treeDataSet(root,deepness,branchingFactor, dataPoints);
    return dataPoints;
}

void PLearn::TransformationLearner::seeTargetReconstructionSet ( int  targetIdx,
TVec< ReconstructionCandidate > &  targetReconstructionSet 
) const [private]

VIEWS ON RECONSTRUCTION SET AND TRAINING SET.

stores a VIEW on the reconstruction candidates related to the specified target (into the variable "targetReconstructionSet")

Definition at line 1457 of file TransformationLearner.cc.

References nbTargetReconstructions, reconstructionSet, and PLearn::TVec< T >::subVec().

{
    int startIdx = targetIdx *nbTargetReconstructions;
    targetReconstructionSet = reconstructionSet.subVec(startIdx, 
                                                       nbTargetReconstructions); 
}
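
The view semantics matter here: TVec::subVec() shares storage with reconstructionSet, so writes through targetReconstructionSet mutate the learner's state; contrast returnReconstructionCandidates(), which calls copy(). An illustrative sketch of the assumed TVec behavior (not code from this class):

    TVec<ReconstructionCandidate> rset(10);
    TVec<ReconstructionCandidate> view     = rset.subVec(0, 5);        // shares storage with rset
    TVec<ReconstructionCandidate> snapshot = rset.subVec(0, 5).copy(); // independent deep copy
    view[0].weight = 0;      // visible through rset[0].weight
    snapshot[0].weight = 1;  // leaves rset untouched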

void PLearn::TransformationLearner::seeTrainingPoint ( const int  idx,
Vec dst 
) const [inline, private]

stores the "idx"th training data point into the variable 'dst'

Definition at line 1467 of file TransformationLearner.cc.

References PLearn::VMat::getExample(), stp_v, stp_w, and PLearn::PLearner::train_set.

Referenced by computeReconstructionWeight(), findBestTargetReconstructionCandidates(), findBestWeightedNeighbors(), findNearestNeighbors(), generate(), log_density(), MStepBias(), MStepNoiseVarianceMAP(), MStepTransformationDiv(), MStepTransformations(), reconstructionEuclideanDistance(), returnNeighbors(), returnReconstructions(), and smallEStep().

{
    train_set->getExample(idx, dst,stp_v,stp_w);
}

void PLearn::TransformationLearner::sequenceDataSet ( const Vec start,
int  n,
Mat dataPoints,
int  transformIdx = -1 
) const

create a "sequential" dataset: start -> first point -> second point ...

create a "sequential" dataset: start -> second point -> third point ...

->nth point (where "->" stands for : "generate the")

Definition at line 1351 of file TransformationLearner.cc.

References treeDataSet().

Referenced by returnSequenceDataSet().

{
    treeDataSet(start,n-1,1,dataPoints , transformIdx);
}

void PLearn::TransformationLearner::setNoiseVariance ( real  nv)

initializes the noise variance with the given value

Definition at line 1130 of file TransformationLearner.cc.

References noiseVariance, and PLASSERT.

Referenced by declareMethods(), and generatorBuild().

{
    PLASSERT(nv > 0);
    noiseVariance = nv;
}

void PLearn::TransformationLearner::setTransformDistribution ( Vec  td)

initializes the transformation distribution with the given values

Definition at line 1150 of file TransformationLearner.cc.

References isWellDefined(), PLearn::TVec< T >::length(), nbTransforms, PLASSERT, PLearn::TVec< T >::resize(), and transformDistribution.

Referenced by declareMethods(), and generatorBuild().
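
The function body is not reproduced on this page. A plausible reconstruction from the referenced symbols (an assumption, not the verbatim source):

    {
        PLASSERT(td.length() == nbTransforms);
        PLASSERT(isWellDefined(td));
        transformDistribution.resize(nbTransforms);
        transformDistribution << td;
    }

For example, a uniform distribution over m transformations could be set with: Vec td(m); td.fill(1.0/m); learner->setTransformDistribution(td); (m standing for nbTransforms; illustrative only).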

void PLearn::TransformationLearner::setTransformsParameters ( TVec< Mat >  transforms_,
Mat  biasSet_ = Mat() 
)

initializes the transformation parameters to the given values (biases are set to 0 if none are given)

Definition at line 1085 of file TransformationLearner.cc.

References biasSet, inputSpaceDim, PLearn::TMat< T >::length(), PLearn::TVec< T >::length(), nbTransforms, PLASSERT, PLearn::TMat< T >::resize(), PLearn::TVec< T >::resize(), PLearn::TMat< T >::subMatRows(), transforms, transformsSet, PLearn::TMat< T >::width(), and withBias.

Referenced by declareMethods(), and generatorBuild().

{
    
    PLASSERT(transforms_.length() == nbTransforms);
    
    int nbRows = inputSpaceDim*nbTransforms;
    transformsSet.resize(nbRows,inputSpaceDim);
    transforms.resize(nbTransforms);
    for(int k = 0; k< nbTransforms; k++){
        transforms[k] = transformsSet.subMatRows(k * inputSpaceDim, inputSpaceDim);       
    }


    int rowIdx = 0;
    for(int t=0; t<nbTransforms; t++){
        PLASSERT(transforms_[t].width() == inputSpaceDim);
        PLASSERT(transforms_[t].length() == inputSpaceDim);
        transformsSet.subMatRows(rowIdx,inputSpaceDim) << transforms_[t];
        transforms[t]= transformsSet.subMatRows(rowIdx,inputSpaceDim);
        rowIdx += inputSpaceDim;
    }
    if(withBias){    
        PLASSERT(biasSet_.length() == nbTransforms);
        PLASSERT(biasSet_.width() == inputSpaceDim);
        biasSet = Mat(nbTransforms, inputSpaceDim);
        biasSet << biasSet_;
    }
    else{
        biasSet = Mat(0,0);
    }
    

}
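
A hedged usage sketch, initializing every transformation to the identity (the dimensions are illustrative and must match inputSpaceDim and nbTransforms):

    int d = 3, m = 2;                       // input dimension and number of transformations
    TVec<Mat> init(m);
    for (int k = 0; k < m; ++k) {
        init[k] = Mat(d, d);
        for (int i = 0; i < d; ++i)
            for (int j = 0; j < d; ++j)
                init[k](i, j) = (i == j) ? 1.0 : 0.0;   // identity matrix
    }
    learner->setTransformsParameters(init);             // no bias matrix given: biases stay empty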

void PLearn::TransformationLearner::smallEStep ( ) [private]

SMALL E STEP (expectation step)

updating the weights while keeping the candidate neighbor set fixed

Definition at line 2074 of file TransformationLearner.cc.

References INIT_weight(), nbReconstructions, normalizeTargetWeights(), reconstructionSet, seeTrainingPoint(), ses_neighbor, ses_predictedTarget, ses_target, SUM_weights(), and updateReconstructionWeight().

Referenced by declareMethods(), EStep(), and initEStepB().

{
    int candidateIdx =0;
    int  targetIdx = reconstructionSet[candidateIdx].targetIdx;
    real totalWeight = INIT_weight(0);
    seeTrainingPoint(targetIdx,ses_target);
    
    while(candidateIdx < nbReconstructions){
        
        seeTrainingPoint(reconstructionSet[candidateIdx].neighborIdx, ses_neighbor);
        totalWeight = SUM_weights(totalWeight,
                                  updateReconstructionWeight(candidateIdx,
                                                             ses_target,
                                                             ses_neighbor,
                                                             reconstructionSet[candidateIdx].transformIdx,
                                                             ses_predictedTarget));
        candidateIdx ++;
    
        if(candidateIdx == nbReconstructions)
            normalizeTargetWeights(targetIdx,totalWeight);
        else if(targetIdx != reconstructionSet[candidateIdx].targetIdx){
            normalizeTargetWeights(targetIdx, totalWeight);
            totalWeight = INIT_weight(0);
            targetIdx = reconstructionSet[candidateIdx].targetIdx;
            seeTrainingPoint(targetIdx, ses_target);
        }
    }    
}

real PLearn::TransformationLearner::SUM_weights ( real  weight1,
real  weight2 
) const [inline, private]

SUM.

arithmetic operations on reconstruction weights are done in the log domain: given weight1 = log(p1) and weight2 = log(p2), we want weight3 = log(p1 + p2) = logadd(weight1, weight2)

Definition at line 1619 of file TransformationLearner.cc.

References PLearn::logadd().

Referenced by expandTargetNeighborPairInReconstructionSet(), initEStepA(), largeEStepA(), largeEStepB(), log_density(), MStepBias(), MStepTransformDistributionMAP(), and smallEStep().

{
    
    return logadd(weight1,weight2);
}
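
Keeping weights in the log domain avoids underflow when the underlying probabilities are tiny. A self-contained sketch of the idea behind logadd (an illustration, not PLearn's implementation):

    #include <cmath>
    #include <algorithm>

    // log(p1 + p2) from lw1 = log(p1) and lw2 = log(p2), without leaving the log domain
    double log_add(double lw1, double lw2) {
        double hi = std::max(lw1, lw2);
        double lo = std::min(lw1, lw2);
        return hi + std::log1p(std::exp(lo - hi));  // exp() argument is <= 0, so no overflow
    }
    // e.g. log_add(-800.0, -801.0) is finite, whereas log(exp(-800) + exp(-801))
    // underflows to log(0) = -inf in double precision.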

void PLearn::TransformationLearner::train ( ) [virtual]

The role of the train method is to bring the learner up to stage == nstages, updating the train_stats collector with training costs measured on-line in the process.

Reimplemented from PLearn::PDistribution.

Definition at line 695 of file TransformationLearner.cc.

References buildLearnedParameters(), EStep(), initEStep(), MStep(), PLearn::PLearner::nstages, and PLearn::PLearner::stage.

{
    
  
    //PLERROR("train method not implemented for TransformationLearner");
    // The role of the train method is to bring the learner up to
    // stage==nstages, updating train_stats with training costs measured
    // on-line in the process.

    /* TYPICAL CODE:

    static Vec input;  // static so we don't reallocate memory each time...
    static Vec target; // (but be careful that static means shared!)
    input.resize(inputsize());    // the train_set's inputsize()
    target.resize(targetsize());  // the train_set's targetsize()
    real weight;

    // This generic PLearner method does a number of standard stuff useful for
    // (almost) any learner, and return 'false' if no training should take
    // place. See PLearner.h for more details.
    if (!initTrain())
        return;

    while(stage<nstages)
    {
        // clear statistics of previous epoch
        train_stats->forget();

        //... train for 1 stage, and update train_stats,
        // using train_set->getExample(input, target, weight)
        // and train_stats->update(train_costs)

        ++stage;
        train_stats->finalize(); // finalize statistics for this epoch
    }
    */

    if(stage==0)
    {
        buildLearnedParameters();
        initEStep();
    }
    while(stage<nstages)
    {
        MStep();
        EStep();
        stage ++;
    }
    
}
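
A hedged end-to-end sketch of the usual PLearner workflow (option handling may differ; 'trainData' is an assumed VMat of d-dimensional points):

    PP<TransformationLearner> learner = new TransformationLearner();
    learner->nstages = 10;               // number of EM iterations (assumed public option)
    learner->build();                    // finalize option-dependent setup
    learner->setTrainingSet(trainData);  // calls forget() by default
    learner->train();                    // initEStep(), then alternate MStep()/EStep()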

void PLearn::TransformationLearner::treeDataSet ( const Vec root,
int  deepness,
int  branchingFactor,
Mat dataPoints,
int  transformIdx = -1 
) const

creates a data set: equivalent to building a tree with fixed deepness and constant branching factor

root -> child1, child2, ..., childn; each child in turn generates n children of its own at the next level, and so on (where "a -> b" stands for "a generates b")

All the children are generated by the same process: 1) choose a transformation, 2) apply the transformation to the parent, 3) add noise to the result.

Definition at line 1300 of file TransformationLearner.cc.

References batchGeneratePredictedFrom(), inputSpaceDim, PLearn::TVec< T >::length(), m, PLASSERT, PLearn::pow(), PLearn::TMat< T >::resize(), and PLearn::TMat< T >::subMatRows().

Referenced by returnTreeDataSet(), and sequenceDataSet().

{

    PLASSERT(root.length() == inputSpaceDim);

    //we look at the length of the given matrix dataPoint ;
    int nbDataPoints;
    if(branchingFactor == 1)
        nbDataPoints = deepness + 1;  
    else nbDataPoints = int((1- pow(1.0*branchingFactor,deepness + 1.0))
                            /
                            (1 - branchingFactor));
    dataPoints.resize(nbDataPoints,inputSpaceDim);
    
    //root = first element in the matrix dataPoints
    dataPoints(0) << root;
  
    //generate the other data points 
    int centerIdx=0 ;
    for(int dataIdx=1; dataIdx < nbDataPoints ; dataIdx+=branchingFactor){
        
        Vec v = dataPoints(centerIdx);
        Mat m = dataPoints.subMatRows(dataIdx, branchingFactor);
        if(transformIdx>=0){
            batchGeneratePredictedFrom(v,m,transformIdx);
        }
        else{
            batchGeneratePredictedFrom(v,m);
        } 
        centerIdx ++ ;
    }  
}
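
The row count is the geometric series 1 + b + b^2 + ... + b^deepness = (1 - b^(deepness+1)) / (1 - b) for branching factor b > 1; e.g. deepness = 3 and b = 2 give (1 - 16)/(1 - 2) = 15 rows. A hedged usage sketch:

    // Hypothetical usage: a depth-3 binary generation tree rooted at a training point.
    Vec root = learner->returnTrainingPoint(0);
    Mat tree = learner->returnTreeDataSet(root, 3, 2);  // 1 + 2 + 4 + 8 = 15 rows
    // tree(0) is the root; every other row is generated from its parent row.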

real PLearn::TransformationLearner::updateReconstructionWeight ( int  candidateIdx,
const Vec target,
const Vec neighbor,
int  transformIdx,
Vec predictedTarget 
) [inline, private]

NOT A USER METHOD !

Definition at line 1644 of file TransformationLearner.cc.

References computeReconstructionWeight(), reconstructionSet, and w.

                                                                             {
    
    real w = computeReconstructionWeight(target,
                                         neighbor,
                                         transformIdx,
                                         predictedTarget);
    reconstructionSet[candidateIdx].weight = w;
    return w;
}

real PLearn::TransformationLearner::updateReconstructionWeight ( int  candidateIdx) [inline, private]

update/compute the weight of a reconstruction candidate with the current transformation parameters

Definition at line 1630 of file TransformationLearner.cc.

References computeReconstructionWeight(), reconstructionSet, and w.

Referenced by smallEStep().

{
    int targetIdx = reconstructionSet[candidateIdx].targetIdx;
    int neighborIdx = reconstructionSet[candidateIdx].neighborIdx;
    int transformIdx = reconstructionSet[candidateIdx].transformIdx;
    
    real w = computeReconstructionWeight(targetIdx,
                                         neighborIdx,
                                         transformIdx);
    reconstructionSet[candidateIdx].weight = w;
    return w; 
}


Member Data Documentation

Reimplemented from PLearn::PDistribution.

Definition at line 403 of file TransformationLearner.h.

Vectors of matrices that will be used in transformations parameters updating process.

Each matrix is a view on a sub-matrix in the bigger matrix "B_C" described above.

Definition at line 641 of file TransformationLearner.h.

Referenced by mainLearnerBuild(), and MStepTransformations().

Storage space that will be used in the maximization step, in transformation parameters updating process.

It represents a set of sub-matrices; there are exactly two sub-matrices per transformation.

Definition at line 638 of file TransformationLearner.h.

Referenced by mainLearnerBuild(), and MStepTransformations().

A transformation learner might behave as a learner, as well as a generator.

Definition at line 184 of file TransformationLearner.h.

Referenced by build_(), and declareOptions().

Definition at line 274 of file TransformationLearner.h.

Referenced by declareOptions(), mainLearnerBuild(), and MStep().

Definition at line 273 of file TransformationLearner.h.

Referenced by declareOptions(), mainLearnerBuild(), and MStep().

views on sub-matrices of the matrix transformsSet

set of bias (one by transformation) -might be used only if the flag "withBias" is turned on

Definition at line 595 of file TransformationLearner.h.

Referenced by applyTransformationOn(), buildLearnedParameters(), declareOptions(), generatorBuild(), initTransformsParameters(), MStepBias(), MStepTransformations(), and setTransformsParameters().

Definition at line 641 of file TransformationLearner.h.

Referenced by mainLearnerBuild(), and MStepTransformations().

Set to true, it modifies the way the transformation parameters are learned: a term representing diversity among the transformations, div_factor*sum(||theta_i - theta_j||^2), is added to the function to optimize. The transformations can no longer all be updated at the same time; periods and offsets have to be defined to know when to update each of them.

Definition at line 220 of file TransformationLearner.h.

Referenced by declareOptions(), mainLearnerBuild(), and MStep().

Definition at line 658 of file TransformationLearner.h.

Referenced by findBestWeightedNeighbors(), and mainLearnerBuild().

Definition at line 659 of file TransformationLearner.h.

Referenced by findBestWeightedNeighbors(), and mainLearnerBuild().

Definition at line 657 of file TransformationLearner.h.

Referenced by findBestWeightedNeighbors(), and mainLearnerBuild().

Definition at line 653 of file TransformationLearner.h.

Referenced by findNearestNeighbors(), and mainLearnerBuild().

Definition at line 652 of file TransformationLearner.h.

Referenced by findNearestNeighbors(), and mainLearnerBuild().

how are the initial values of the parameters to learn chosen?

Definition at line 225 of file TransformationLearner.h.

Referenced by declareOptions(), and initEStep().

Definition at line 235 of file TransformationLearner.h.

Referenced by declareOptions(), and EStep().

For a given training point, we do not consider all the possibilities for the hidden variables.

We approximate EM by using only the hidden variables with the highest probability. That is, for each point in the training set, we keep a fixed number of hidden-variable combinations, the most probable ones. We call that selection the "large expectation step". There are 2 versions, A and B. The following variables tell us when to perform each one (see EStep() for more details).

Definition at line 234 of file TransformationLearner.h.

Referenced by declareOptions(), and EStep().

Definition at line 237 of file TransformationLearner.h.

Referenced by EStep().

Definition at line 236 of file TransformationLearner.h.

Referenced by declareOptions(), and EStep().

is the variance (precision) of the noise random variable learned or fixed? (recall that precision = 1/variance)

Definition at line 204 of file TransformationLearner.h.

Referenced by buildLearnedParameters(), declareOptions(), and mainLearnerBuild().

is the transformation distribution learned or fixed?

Definition at line 210 of file TransformationLearner.h.

Referenced by buildLearnedParameters(), declareOptions(), and mainLearnerBuild().

Definition at line 648 of file TransformationLearner.h.

Referenced by mainLearnerBuild().

Definition at line 649 of file TransformationLearner.h.

Referenced by mainLearnerBuild().

The following variable will be used to ensure p(x,v,t) > 0 at the beginning (see the implementation of randomReconstructionWeight() for more details)

Definition at line 190 of file TransformationLearner.h.

Referenced by declareOptions(), and randomWeight().

Definition at line 667 of file TransformationLearner.h.

Referenced by mainLearnerBuild(), and MStepBias().

Definition at line 664 of file TransformationLearner.h.

Referenced by mainLearnerBuild(), and MStepBias().

Definition at line 665 of file TransformationLearner.h.

Referenced by mainLearnerBuild(), and MStepBias().

Definition at line 668 of file TransformationLearner.h.

Referenced by mainLearnerBuild(), and MStepBias().

Definition at line 666 of file TransformationLearner.h.

Referenced by mainLearnerBuild(), and MStepBias().

Definition at line 671 of file TransformationLearner.h.

Referenced by mainLearnerBuild(), and MStepNoiseVarianceMAP().

Definition at line 672 of file TransformationLearner.h.

Referenced by mainLearnerBuild(), and MStepNoiseVarianceMAP().

Definition at line 670 of file TransformationLearner.h.

Referenced by mainLearnerBuild(), and MStepNoiseVarianceMAP().

Definition at line 669 of file TransformationLearner.h.

Referenced by mainLearnerBuild(), and MStepNoiseVarianceMAP().

Definition at line 662 of file TransformationLearner.h.

Referenced by mainLearnerBuild(), and MStepTransformations().

Definition at line 663 of file TransformationLearner.h.

Referenced by mainLearnerBuild(), and MStepTransformations().

Definition at line 661 of file TransformationLearner.h.

Referenced by mainLearnerBuild(), and MStepTransformations().

Definition at line 660 of file TransformationLearner.h.

Referenced by mainLearnerBuild(), and MStepTransformations().

Definition at line 673 of file TransformationLearner.h.

Referenced by mainLearnerBuild(), and MStepTransformationDiv().

Definition at line 674 of file TransformationLearner.h.

Referenced by mainLearnerBuild(), and MStepTransformationDiv().

Definition at line 675 of file TransformationLearner.h.

Referenced by mainLearnerBuild(), and MStepTransformationDiv().

Definition at line 678 of file TransformationLearner.h.

Referenced by mainLearnerBuild(), and MStepTransformationDiv().

Definition at line 679 of file TransformationLearner.h.

Referenced by mainLearnerBuild(), and MStepTransformationDiv().

Definition at line 677 of file TransformationLearner.h.

Referenced by mainLearnerBuild(), and MStepTransformationDiv().

Definition at line 676 of file TransformationLearner.h.

Referenced by mainLearnerBuild(), and MStepTransformationDiv().

total number of combinations (x,v,t) kept in the reconstruction set

Definition at line 616 of file TransformationLearner.h.

Referenced by buildLearnedParameters(), mainLearnerBuild(), MStepBias(), MStepTransformationDiv(), MStepTransformations(), MStepTransformDistributionMAP(), and smallEStep().

number of hidden-variable combinations kept for a specific target in the reconstruction set.

(Those combinations might be seen as reconstructions of the target.)

Definition at line 613 of file TransformationLearner.h.

Referenced by findBestTargetReconstructionCandidates(), largeEStepA(), mainLearnerBuild(), MStepNoiseVarianceMAP(), normalizeTargetWeights(), returnNeighbors(), returnReconstructionCandidates(), returnReconstructions(), and seeTargetReconstructionSet().

Definition at line 644 of file TransformationLearner.h.

Referenced by mainLearnerBuild(), and MStepTransformDistributionMAP().

These 2 parameters have to be defined if the noise variance is learned using a MAP procedure.

We suppose that the prior distribution for the noise variance is a gamma distribution with parameters alpha and beta: p(x | alpha, beta) = beta^alpha * x^(alpha-1) * exp(-beta*x) / Gamma(alpha). Note: if alpha = 1 and beta = 0, all the possibilities are equiprobable (no regularization effect).

Definition at line 252 of file TransformationLearner.h.

Referenced by declareOptions(), generatorBuild(), initNoiseVariance(), mainLearnerBuild(), and MStepNoiseVariance().

variance of the NOISE random variable.

(Recall that this r.v. is normally distributed with mean 0.) If it is a learned parameter, this value is taken as the initial value of the noise variance; if it is not well defined (<= 0), it is redefined using its prior distribution (gamma).

Definition at line 287 of file TransformationLearner.h.

Referenced by buildLearnedParameters(), computeReconstructionWeight(), declareOptions(), generatePredictedFrom(), initNoiseVariance(), log_density(), MStepBias(), MStepNoiseVarianceMAP(), MStepTransformationDiv(), MStepTransformations(), and setNoiseVariance().

Definition at line 243 of file TransformationLearner.h.

Referenced by declareOptions(), mainLearnerBuild(), and MStep().

If the noise variance (precision) is learned, the following variables tell us when to update the noise variance in the maximization steps (see MStep() for more details):

Definition at line 242 of file TransformationLearner.h.

Referenced by declareOptions(), mainLearnerBuild(), and MStep().

if we learn the noise variance, do we use the MAP estimator ?

Definition at line 207 of file TransformationLearner.h.

Referenced by buildLearnedParameters(), declareOptions(), and mainLearnerBuild().

if we learn the transformation distribution, do we use the MAP estimator ?

Definition at line 213 of file TransformationLearner.h.

Referenced by buildLearnedParameters(), declareOptions(), and mainLearnerBuild().

Definition at line 646 of file TransformationLearner.h.

Referenced by log_density(), mainLearnerBuild(), and smallEStep().

Definition at line 647 of file TransformationLearner.h.

Referenced by log_density(), mainLearnerBuild(), and smallEStep().

Definition at line 645 of file TransformationLearner.h.

Referenced by mainLearnerBuild(), and smallEStep().

Definition at line 650 of file TransformationLearner.h.

Referenced by seeTrainingPoint().

Definition at line 651 of file TransformationLearner.h.

Referenced by seeTrainingPoint().

Will be used to store a view on the reconstructionSet.

The view will consist in all the entries related to a specific target

Definition at line 633 of file TransformationLearner.h.

multinomial distribution for the transformations, i.e. probability of the kth transformation = transformDistribution[k] (might be learned or fixed). If it is a learned parameter, this value is taken as the initial value of the transformation distribution; if it is not well defined (size, positivity, sum to 1), it is redefined using its prior distribution (Dirichlet).

Definition at line 305 of file TransformationLearner.h.

Referenced by buildLearnedParameters(), computeReconstructionWeight(), declareOptions(), initTransformDistribution(), log_density(), MStepTransformDistributionMAP(), pickTransformIdx(), and setTransformDistribution().

This parameter has to be defined if the transformation distribution is learned using a MAP procedure.

We suppose that this distribution has a multinomial form.

Definition at line 267 of file TransformationLearner.h.

Referenced by declareOptions(), generatorBuild(), initTransformDistribution(), mainLearnerBuild(), and MStepTransformDistribution().

Definition at line 259 of file TransformationLearner.h.

Referenced by declareOptions(), mainLearnerBuild(), and MStep().

If the transformation distribution is learned, the following variables tell us when to update it in the maximization steps (see MStep() for more details):

Definition at line 258 of file TransformationLearner.h.

Referenced by declareOptions(), mainLearnerBuild(), and MStep().

what is the global form of the transformation functions used?

Definition at line 196 of file TransformationLearner.h.

Referenced by applyTransformationOn(), declareOptions(), initTransformsParameters(), MStepTransformationDiv(), and MStepTransformations().

Definition at line 272 of file TransformationLearner.h.

Referenced by declareOptions(), mainLearnerBuild(), and MStep().

tells us when to update the transformation parameters

Definition at line 271 of file TransformationLearner.h.

Referenced by declareOptions(), mainLearnerBuild(), and MStep().

standard deviations for the transformation parameters:

Definition at line 625 of file TransformationLearner.h.

Referenced by generatorBuild(), initTransformsParameters(), and mainLearnerBuild().

set of transformations: an (m*d) x d matrix, where

m = number of transformations, d = dimensionality of the input space; rows k*d to k*d + d (exclusive) form the sub-matrix holding the parameters of the kth transformation (0 <= k < m)

Definition at line 590 of file TransformationLearner.h.

Referenced by buildLearnedParameters(), declareOptions(), generatorBuild(), initTransformsParameters(), returnAllTransforms(), and setTransformsParameters().

variance on the transformation parameters (prior distribution = normal with mean 0)

Definition at line 290 of file TransformationLearner.h.

Referenced by declareOptions(), generatorBuild(), mainLearnerBuild(), MStepBias(), MStepTransformationDiv(), and MStepTransformations().


The documentation for this class was generated from the following files: TransformationLearner.h and TransformationLearner.cc.