PLearn 0.1
PLearn::ProjectionErrorVariable Class Reference

The first input is a set of n_dim vectors f_i, each in R^n (possibly given as a single vector formed by their concatenation). The second input is a set of T vectors t_j, each in R^n (likewise possibly given as a single concatenated vector). The output is sum_j min_{w_j} || t_j - sum_i w_{ji} f_i ||^2, where each row w_j of w is optimized analytically and separately for each j. More...

#include <ProjectionErrorVariable.h>

Inheritance diagram for PLearn::ProjectionErrorVariable:
Collaboration diagram for PLearn::ProjectionErrorVariable:


Public Member Functions

 ProjectionErrorVariable ()
 Default constructor for persistence.
 ProjectionErrorVariable (Variable *input1, Variable *input2, int n=-1, bool normalize_by_neighbor_distance=true, bool use_subspace_distance=false, real norm_penalization=1.0, real epsilon=1e-6, real regularization=0, bool ordered_vectors=true)
virtual string classname () const
virtual OptionList & getOptionList () const
virtual OptionMap & getOptionMap () const
virtual RemoteMethodMap & getRemoteMethodMap () const
virtual ProjectionErrorVariable * deepCopy (CopiesMap &copies) const
virtual void build ()
 Post-constructor.
virtual void recomputeSize (int &l, int &w) const
 Recomputes the length l and width w that this variable should have, according to its parent variables.
virtual void fprop ()
 compute output given input
virtual void bprop ()
virtual void symbolicBprop ()
 compute a piece of new Var graph that represents the symbolic derivative of this Var

Static Public Member Functions

static string _classname_ ()
 ProjectionErrorVariable.
static OptionList & _getOptionList_ ()
static RemoteMethodMap & _getRemoteMethodMap_ ()
static Object * _new_instance_for_typemap_ ()
static bool _isa_ (const Object *o)
static void _static_initialize_ ()
static const PPath & declaringFile ()

Public Attributes

int n
bool use_subspace_distance
bool normalize_by_neighbor_distance
real norm_penalization
real epsilon
real regularization
bool ordered_vectors
int n_dim
int T
Vec S
Vec fw
Vec norm_err
Vec ww
Vec uu
Vec wwuu
Vec rhs
Vec Tu
Vec one_over_norm_T
Vec norm_f
Mat F
Mat TT
Mat dF
Mat Ut
Mat V
Mat B
Mat VVt
Mat A
Mat A11
Mat A12
Mat A21
Mat A22
Mat wwuuM
Mat FT
Mat FT1
Mat FT2
Mat fw_minus_t
Mat w

Static Public Attributes

static StaticInitializer _static_initializer_

Protected Member Functions

void build_ ()
 This does the actual building.

Private Types

typedef BinaryVariable inherited

Detailed Description

The first input is a set of n_dim vectors f_i, each in R^n (possibly given as a single vector formed by their concatenation). The second input is a set of T vectors t_j, each in R^n (likewise possibly given as a single concatenated vector). The output is sum_j min_{w_j} || t_j - sum_i w_{ji} f_i ||^2, where each row w_j of w is optimized analytically and separately for each j.

Definition at line 59 of file ProjectionErrorVariable.h.
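As a quick orientation, the variable is typically created from two Var inputs and yields a 1x1 Var holding the cost. The snippet below is an illustrative sketch only; the wrapper function name and the assumption that each input stores one vector per row are not part of the class documentation.

#include <ProjectionErrorVariable.h>

using namespace PLearn;

// Hypothetical helper (not part of PLearn): F_basis holds the n_dim basis
// vectors f_i, one per row; neighbors holds the T target vectors t_j, one
// per row; both live in R^n.
Var projectionCost(Var F_basis, Var neighbors, int n)
{
    // The returned Var has size 1x1 and contains
    // sum_j min_{w_j} || t_j - sum_i w_{ji} f_i ||^2, averaged over the T
    // neighbors (see fprop() below), using the constructor's default options.
    return new ProjectionErrorVariable(F_basis, neighbors, n);
}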


Member Typedef Documentation

typedef BinaryVariable PLearn::ProjectionErrorVariable::inherited [private]

Reimplemented from PLearn::BinaryVariable.

Definition at line 61 of file ProjectionErrorVariable.h.


Constructor & Destructor Documentation

PLearn::ProjectionErrorVariable::ProjectionErrorVariable ( ) [inline]

Default constructor for persistence.

Definition at line 80 of file ProjectionErrorVariable.h.

{}
PLearn::ProjectionErrorVariable::ProjectionErrorVariable ( Variable *  input1,
Variable *  input2,
int  n = -1,
bool  normalize_by_neighbor_distance = true,
bool  use_subspace_distance = false,
real  norm_penalization = 1.0,
real  epsilon = 1e-6,
real  regularization = 0,
bool  ordered_vectors = true 
)

Definition at line 91 of file ProjectionErrorVariable.cc.

References build_().

    : inherited(input1, input2, 1, 1), n(n_), use_subspace_distance(use_subspace_distance_), 
      normalize_by_neighbor_distance(normalize_by_neighbor_distance_), norm_penalization(norm_penalization_), 
      epsilon(epsilon_),  regularization(regularization_), ordered_vectors(ordered_vectors_)
{
    build_();
}



Member Function Documentation

string PLearn::ProjectionErrorVariable::_classname_ ( ) [static]

ProjectionErrorVariable.

Reimplemented from PLearn::BinaryVariable.

Definition at line 89 of file ProjectionErrorVariable.cc.

OptionList & PLearn::ProjectionErrorVariable::_getOptionList_ ( ) [static]

Reimplemented from PLearn::BinaryVariable.

Definition at line 89 of file ProjectionErrorVariable.cc.

RemoteMethodMap & PLearn::ProjectionErrorVariable::_getRemoteMethodMap_ ( ) [static]

Reimplemented from PLearn::BinaryVariable.

Definition at line 89 of file ProjectionErrorVariable.cc.

bool PLearn::ProjectionErrorVariable::_isa_ ( const Object *  o) [static]

Reimplemented from PLearn::BinaryVariable.

Definition at line 89 of file ProjectionErrorVariable.cc.

Object * PLearn::ProjectionErrorVariable::_new_instance_for_typemap_ ( ) [static]

Reimplemented from PLearn::Object.

Definition at line 89 of file ProjectionErrorVariable.cc.

void PLearn::ProjectionErrorVariable::_static_initialize_ ( ) [static]

Reimplemented from PLearn::BinaryVariable.

Definition at line 89 of file ProjectionErrorVariable.cc.

void PLearn::ProjectionErrorVariable::bprop ( ) [virtual]

Implements PLearn::Variable.

Definition at line 367 of file ProjectionErrorVariable.cc.

References PLearn::TVec< T >::clear(), dF, PLearn::externalProductScaleAcc(), F, fw, fw_minus_t, PLearn::Variable::gradient, i, j, PLearn::multiplyAcc(), n_dim, norm_err, norm_penalization, normalize_by_neighbor_distance, one_over_norm_T, ordered_vectors, PLearn::substract(), T, TT, use_subspace_distance, w, and ww.

{
    // compute dcost/dF and accumulate it into input1->matGradient,
    // keeping w fixed
    // 
    // IF use_subspace_distance
    //   dcost/dF = w (F'w - T'u)'
    //
    // ELSE IF ordered_vectors
    //   dcost_k/df_k = sum_j 2(sum_{i<=k} w_i f_i  - t_j) w_k/||t_j||
    // 
    // ELSE
    //   dcost/dfw = 2 (fw - t_j)/||t_j||
    //   dfw/df_i = w_i 
    //  so 
    //   dcost/df_i = sum_j 2(fw - t_j) w_i/||t_j||
    //
    // IF norm_penalization>0
    //   add the following to the gradient of f_i:
    //     norm_penalization*2*(||f_i||^2 - 1)*f_i
    // N.B. WE CONSIDER THE input2 (t_j's) TO BE FIXED AND DO NOT 
    // COMPUTE THE GRADIENT WRT to input2. IF THE USE OF THIS
    // OBJECT CHANGES THIS MAY HAVE TO BE REVISED.
    //

    if (use_subspace_distance)
    {
        externalProductScaleAcc(dF,ww,fw,gradient[0]);
        if (norm_penalization>0)
            for (int i=0;i<n_dim;i++)
            {
                Vec df_i = dF(i); // n-vector
                multiplyAcc(df_i, F(i), gradient[0]*norm_penalization*2*norm_err[i]);
            }
    }
    else if (ordered_vectors)
    {
        for (int j=0;j<T;j++)
        {
            fw.clear();
            Vec wj = w(j);
            Vec fw_minus_tj = fw_minus_t(j); // n-vector
            Vec tj = TT(j);
            for (int k=0;k<n_dim;k++)
            {
                Vec f_k = F(k); // n-vector
                Vec df_k = dF(k); // n-vector
                multiplyAcc(fw,f_k,wj[k]);
                substract(fw,tj,fw_minus_tj);
                if (normalize_by_neighbor_distance)
                    multiplyAcc(df_k,fw_minus_tj,gradient[0] * wj[k] * 2 * one_over_norm_T[j]/real(T));
                else
                    multiplyAcc(df_k,fw_minus_tj,gradient[0] * wj[k] * 2/real(T));
            }
        }
    }
    else
    {
        for (int j=0;j<T;j++)
        {
            Vec fw_minus_tj = fw_minus_t(j); // n-vector
            Vec wj = w(j);
            for (int i=0;i<n_dim;i++)
            {
                Vec df_i = dF(i); // n-vector
                if (normalize_by_neighbor_distance)
                    multiplyAcc(df_i, fw_minus_tj, gradient[0] * wj[i]*2*one_over_norm_T[j]/real(T));
                else
                    multiplyAcc(df_i, fw_minus_tj, gradient[0] * wj[i]*2/real(T));
                if (norm_penalization>0)
                    multiplyAcc(df_i, F(i), gradient[0]*norm_penalization*2*norm_err[i]/real(T));
            }
        }
    }
}
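For reference, here is how the factors accumulated by multiplyAcc() in the last branch arise. This is only a sketch of the derivation, written in the same notation as the comments above (note that one_over_norm_T[j] stores 1/||t_j||^2, and the 1/T factor matches the averaging done in fprop()):

\[
C_j \;=\; \frac{\bigl\| \sum_i w_{ji}\, f_i \;-\; t_j \bigr\|^2}{\|t_j\|^2}
\qquad\Longrightarrow\qquad
\frac{\partial C_j}{\partial f_i} \;=\; \frac{2\,\bigl(\sum_k w_{jk}\, f_k \;-\; t_j\bigr)\, w_{ji}}{\|t_j\|^2},
\]

so each f_i accumulates gradient[0] * 2 * (fw - t_j) * w_{ji} * one_over_norm_T[j] / T summed over j, with w treated as a constant. Ignoring the dependence of the optimal w_j on F is justified by the envelope theorem, since in this branch w_j is the exact minimizer.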


void PLearn::ProjectionErrorVariable::build ( ) [virtual]

Post-constructor.

The normal implementation should call simply inherited::build(), then this class's build_(). This method should be callable again at later times, after modifying some option fields to change the "architecture" of the object.

Reimplemented from PLearn::BinaryVariable.

Definition at line 104 of file ProjectionErrorVariable.cc.

References PLearn::BinaryVariable::build(), and build_().


void PLearn::ProjectionErrorVariable::build_ ( ) [protected]

This does the actual building.

Reimplemented from PLearn::BinaryVariable.

Definition at line 111 of file ProjectionErrorVariable.cc.

References A, A11, A12, A21, A22, B, dF, F, PLearn::TVec< T >::fill(), FT, FT1, FT2, fw, fw_minus_t, PLearn::BinaryVariable::input1, PLearn::BinaryVariable::input2, PLearn::Var::length(), n, n_dim, norm_err, norm_f, norm_penalization, one_over_norm_T, ordered_vectors, PLERROR, PLearn::TMat< T >::resize(), PLearn::TVec< T >::resize(), rhs, PLearn::TMat< T >::subMat(), PLearn::TVec< T >::subVec(), T, PLearn::TVec< T >::toMat(), TT, Tu, use_subspace_distance, Ut, uu, V, VVt, w, PLearn::Var::width(), ww, wwuu, and wwuuM.

Referenced by build(), and ProjectionErrorVariable().

{
    if (input1 && input2) {
        if ((input1->length()==1 && input1->width()>1) || 
            (input1->width()==1 && input1->length()>1))
        {
            if (n<0) PLERROR("ProjectionErrorVariable: Either the input should be matrices or n should be specified\n");
            n_dim = input1->size()/n;
            if (n_dim*n != input1->size())
                PLERROR("ProjectErrorVariable: the first input size should be an integer multiple of n");
        }
        else 
            n_dim = input1->length();
        if ((input2->length()==1 && input2->width()>1) || 
            (input2->width()==1 && input2->length()>1))
        {
            if (n<0) PLERROR("ProjectionErrorVariable: Either the input should be matrices or n should be specified\n");
            T = input2->size()/n;
            if (T*n != input2->size())
                PLERROR("ProjectErrorVariable: the second input size should be an integer multiple of n");
        }
        else 
            T = input2->length();

        F = input1->value.toMat(n_dim,n);
        dF = input1->gradient.toMat(n_dim,n);
        TT = input2->value.toMat(T,n);
        if (n<0) n = input1->width();
        if (input2->width()!=n)
            PLERROR("ProjectErrorVariable: the two arguments have inconsistant sizes");
        if (n_dim>n)
            PLERROR("ProjectErrorVariable: n_dim should be less than data dimension n");
        if (!use_subspace_distance)
        {
            if (ordered_vectors)
            {
                norm_f.resize(n_dim);
            }
            else
            {
                V.resize(n_dim,n_dim);
                Ut.resize(n,n);
                B.resize(n_dim,n);
                VVt.resize(n_dim,n_dim);
            }
            fw_minus_t.resize(T,n);
            w.resize(T,n_dim);
            one_over_norm_T.resize(T);
        }
        else 
        {
            wwuu.resize(n_dim+T);
            ww = wwuu.subVec(0,n_dim);
            uu = wwuu.subVec(n_dim,T);
            wwuuM = wwuu.toMat(1,n_dim+T);
            rhs.resize(n_dim+T);
            rhs.subVec(0,n_dim).fill(-1.0);
            A.resize(n_dim+T,n_dim+T);
            A11 = A.subMat(0,0,n_dim,n_dim);
            A12 = A.subMat(0,n_dim,n_dim,T);
            A21 = A.subMat(n_dim,0,T,n_dim);
            A22 = A.subMat(n_dim,n_dim,T,T);
            Tu.resize(n);
            FT.resize(n_dim+T,n);
            FT1 = FT.subMat(0,0,n_dim,n);
            FT2 = FT.subMat(n_dim,0,T,n);
            Ut.resize(n,n);
            V.resize(n_dim+T,n_dim+T);
        }
        fw.resize(n);
        if (norm_penalization>0)
            norm_err.resize(n_dim);
    }
}
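A brief sketch of the two input layouts this method accepts (the variable names and sizes below are made up for illustration; in both cases n_dim = 4, T = 8, n = 10):

using namespace PLearn;

// Matrix-shaped inputs: one f_i per row of the first input, one t_j per row
// of the second input; n_dim and T are read from the lengths.
Var F_basis(4, 10);      // 4 basis vectors in R^10
Var neighbors(8, 10);    // 8 target vectors in R^10
Var cost1 = new ProjectionErrorVariable(F_basis, neighbors, /* n */ 10);

// Flat concatenations: a single row (or column) vector per input; n must be
// given explicitly so that n_dim = 40/10 and T = 80/10 can be recovered.
Var F_flat(1, 40);
Var T_flat(1, 80);
Var cost2 = new ProjectionErrorVariable(F_flat, T_flat, /* n */ 10);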


string PLearn::ProjectionErrorVariable::classname ( ) const [virtual]

Reimplemented from PLearn::Object.

Definition at line 89 of file ProjectionErrorVariable.cc.

static const PPath& PLearn::ProjectionErrorVariable::declaringFile ( ) [inline, static]

Reimplemented from PLearn::BinaryVariable.

Definition at line 83 of file ProjectionErrorVariable.h.

ProjectionErrorVariable * PLearn::ProjectionErrorVariable::deepCopy ( CopiesMap &  copies) const [virtual]

Reimplemented from PLearn::BinaryVariable.

Definition at line 89 of file ProjectionErrorVariable.cc.

void PLearn::ProjectionErrorVariable::fprop ( ) [virtual]

compute output given input

Implements PLearn::Variable.

Definition at line 193 of file ProjectionErrorVariable.cc.

References A11, A12, A22, B, PLearn::TMat< T >::clear(), PLearn::TVec< T >::clear(), PLearn::dot(), PLearn::endl(), epsilon, F, FT, FT1, FT2, fw, fw_minus_t, i, j, PLearn::lapackSVD(), PLearn::TMat< T >::length(), PLearn::TVec< T >::length(), PLearn::multiply(), n, n_dim, PLearn::norm(), norm_err, norm_f, norm_penalization, normalize_by_neighbor_distance, one_over_norm_T, ordered_vectors, PLearn::pownorm(), PLearn::product(), PLearn::productAcc(), PLearn::productTranspose(), regularization, PLearn::TMat< T >::resize(), S, PLearn::substract(), PLearn::sum(), PLearn::sumsquare(), T, PLearn::transposeProduct(), TT, Tu, use_subspace_distance, Ut, uu, V, PLearn::Variable::value, w, PLearn::TMat< T >::width(), ww, and wwuu.

{
    // Let F be the input1 matrix with rows f_i.
    // IF use_subspace_distance THEN
    //  We need to solve the system
    //    | FF'  -FT'| |w|   | 1 |
    //    |          | | | = |   |
    //    |-TF'   TT'| |u|   | 0 |
    //  in (w,u), and then scale both down by ||w|| so as to enforce ||w||=1.
    //
    // ELSE IF !ordered_vectors
    //  We need to solve the system 
    //     F F' w_j = F t_j
    //  for each t_j in order to find the solution w of
    //    min_{w_j} || t_j - sum_i w_{ji} f_i ||^2
    //  for each j. Then sum over j the above square errors.
    //  Let F' = U S V' the SVD of F'. Then
    //    w_j = (F F')^{-1} F t_j = (V S U' U S V')^{-1} F t_j = V S^{-2} V' F t_j.
    //  Note that we can pre-compute
    //    B = V S^{-2} V' F = V S^{-1} U'
    //  and
    //    w_j = B t_j is our solution.
    // ELSE (ordered_vectors && !use_subspace_distance)
    //  for each j
    //   for each k
    //     w_{jk} = (t_j . f_k - sum_{i<k} w_i f_i . f_k)/||f_k||^2
    //  cost = sum_j || t_j - sum_i w_i f_i||^2 / ||t_j||^2
    // ENDIF
    //
    // if  norm_penalization>0 then also add the following term:
    //   norm_penalization * sum_i (||f_i||^2 - 1)^2
    //
    real cost = 0;
    if (use_subspace_distance)
    {
        // use SVD of (F' -T')
        FT1 << F;
        multiply(FT2,TT,static_cast<real>(-1.0));
        lapackSVD(FT, Ut, S, V);
        wwuu.clear();//
        for (int k=0;k<S.length();k++)
        {
            real s_k = S[k];
            real sv = s_k+ regularization;
            real coef = 1/(sv * sv);
            if (s_k>epsilon) // ignore the components that have too small singular value (more robust solution)
            {
                real sum_first_elements = 0;
                for (int j=0;j<n_dim;j++) 
                    sum_first_elements += V(j,k);
                for (int i=0;i<n_dim+T;i++)
                    wwuu[i] += V(i,k) * sum_first_elements * coef;
            }
        }

        static bool debugging=false;
        if (debugging)
        {
            productTranspose(A11,F,F);
            productTranspose(A12,F,TT);
            A12 *= -1.0;
            Vec res(ww.length());
            product(res,A11,ww);
            productAcc(res,A12,uu);
            res -= static_cast<real>(1.0);
            cout << "norm of error in w equations: " << norm(res) << endl;
            Vec res2(uu.length());
            transposeProduct(res2,A12,ww);
            productTranspose(A22,TT,TT);
            productAcc(res2,A22,uu);
            cout << "norm of error in u equations: " << norm(res2) << endl;
        }
        // scale w and u so that ||w|| = 1
        real wnorm = sum(ww); // norm(ww);
        wwuu *= 1.0/wnorm;

        // compute the cost = ||F'w - T'u||^2
        transposeProduct(fw,F,ww);
        transposeProduct(Tu,TT,uu);
        fw -= Tu;
        cost = pownorm(fw);
    }
    else // PART THAT IS REALLY USED STARTS HERE
        if (ordered_vectors)
        {
            // compute 1/||f_k||^2 into norm_f
            for (int k=0;k<n_dim;k++)
            {
                Vec fk = F(k);
                norm_f[k] = 1.0/pownorm(fk);
            }
            for(int j=0; j<T;j++)
            {
                Vec tj = TT(j);
                Vec wj = w(j);
                // w_{jk} = (t_j . f_k - sum_{i<k} w_i f_i . f_k)/||f_k||^2            
                for (int k=0;k<n_dim;k++)
                {
                    Vec fk = F(k);
                    real s = dot(tj,fk); 
                    for (int i=0;i<k;i++)
                        s -= wj[i] * dot(F(i),fk);
                    wj[k] = s * norm_f[k];
                }
                transposeProduct(fw, F, wj); // fw = sum_i w_ji f_i = z_m
                Vec fw_minus_tj = fw_minus_t(j);
                substract(fw,tj,fw_minus_tj); // -z_n = z_m - z
                if (normalize_by_neighbor_distance) // THAT'S THE ONE WHICH WORKS WELL:
                {
                    one_over_norm_T[j] = 1.0/pownorm(tj); // = 1/||z||
                    cost += sumsquare(fw_minus_tj)*one_over_norm_T[j]; // = ||z_n||^2 / ||z||^2
                }
                else
                    cost += sumsquare(fw_minus_tj);
            }
        }
        else
        {
            static Mat F_copy;
            F_copy.resize(F.length(),F.width());
            F_copy << F;
            // N.B. this is the SVD of F'
            lapackSVD(F_copy, Ut, S, V);
            B.clear();
            for (int k=0;k<S.length();k++)
            {
                real s_k = S[k];
                if (s_k>epsilon) // ignore the components that have too small singular value (more robust solution)
                { 
                    s_k += regularization;
                    real coef = 1/s_k;
                    for (int i=0;i<n_dim;i++)
                    {
                        real* Bi = B[i];
                        for (int j=0;j<n;j++)
                            Bi[j] += V(i,k)*Ut(k,j)*coef;
                    }
                }
            }
            //  now we have B, we can compute the w's and the cost
            for(int j=0; j<T;j++)
            {
                Vec tj = TT(j);

                Vec wj = w(j);
                product(wj, B, tj); // w_j = B * t_j = projection weights for neighbor j
                transposeProduct(fw, F, wj); // fw = sum_i w_ji f_i = z_m

                Vec fw_minus_tj = fw_minus_t(j);
                substract(fw,tj,fw_minus_tj); // -z_n = z_m - z
                if (normalize_by_neighbor_distance) // THAT'S THE ONE WHICH WORKS WELL:
                {
                    one_over_norm_T[j] = 1.0/pownorm(tj); // = 1/||z||
                    cost += sumsquare(fw_minus_tj)*one_over_norm_T[j]; // = ||z_n||^2 / ||z||^2
                }
                else
                    cost += sumsquare(fw_minus_tj);
            }
        }
    if (norm_penalization>0)
    {
        real penalization=0;
        for (int i=0;i<n_dim;i++)
        {
            Vec f_i = F(i);
            norm_err[i] = pownorm(f_i)-1;
            penalization += norm_err[i]*norm_err[i];
        }
        cost += norm_penalization*penalization;
    }
    value[0] = cost/real(T);
}
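As a reminder of where w_j = B t_j in the SVD branch comes from (a sketch only, using the same notation as the comments above): setting the gradient of the per-neighbor least-squares problem to zero gives the normal equations, which the SVD of F' solves in a numerically robust way:

\[
\min_{w_j} \bigl\| t_j - F^\top w_j \bigr\|^2
\;\Longrightarrow\;
F F^\top w_j = F\, t_j
\;\Longrightarrow\;
w_j = (F F^\top)^{-1} F\, t_j = V S^{-2} V^\top\, V S\, U^\top t_j = V S^{-1} U^\top t_j = B\, t_j ,
\]

with F^\top = U S V^\top. Singular values not larger than epsilon are dropped (and the kept ones are shifted by regularization), which is why B is accumulated only over the components with s_k > epsilon in the loop above.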


OptionList & PLearn::ProjectionErrorVariable::getOptionList ( ) const [virtual]

Reimplemented from PLearn::Object.

Definition at line 89 of file ProjectionErrorVariable.cc.

OptionMap & PLearn::ProjectionErrorVariable::getOptionMap ( ) const [virtual]

Reimplemented from PLearn::Object.

Definition at line 89 of file ProjectionErrorVariable.cc.

RemoteMethodMap & PLearn::ProjectionErrorVariable::getRemoteMethodMap ( ) const [virtual]

Reimplemented from PLearn::Object.

Definition at line 89 of file ProjectionErrorVariable.cc.

void PLearn::ProjectionErrorVariable::recomputeSize ( int &  l,
int &  w 
) const [virtual]

Recomputes the length l and width w that this variable should have, according to its parent variables.

This is used, e.g., by sizeprop(). The default version simply returns the current dimensions, so make sure to overload it in subclasses if this is not appropriate.

Reimplemented from PLearn::Variable.

Definition at line 187 of file ProjectionErrorVariable.cc.

{
    len = 1;
    wid = 1;
}
void PLearn::ProjectionErrorVariable::symbolicBprop ( ) [virtual]

compute a piece of new Var graph that represents the symbolic derivative of this Var

Reimplemented from PLearn::Variable.

Definition at line 444 of file ProjectionErrorVariable.cc.

References PLERROR.

{
    PLERROR("Not implemented");
}

Member Data Documentation

StaticInitializer PLearn::ProjectionErrorVariable::_static_initializer_ [static]

Reimplemented from PLearn::BinaryVariable.

Definition at line 83 of file ProjectionErrorVariable.h.

Mat PLearn::ProjectionErrorVariable::A

Definition at line 74 of file ProjectionErrorVariable.h.

Referenced by build_().

Mat PLearn::ProjectionErrorVariable::A11

Definition at line 74 of file ProjectionErrorVariable.h.

Referenced by build_(), and fprop().

Mat PLearn::ProjectionErrorVariable::A12

Definition at line 74 of file ProjectionErrorVariable.h.

Referenced by build_(), and fprop().

Mat PLearn::ProjectionErrorVariable::A21

Definition at line 74 of file ProjectionErrorVariable.h.

Referenced by build_().

Mat PLearn::ProjectionErrorVariable::A22

Definition at line 74 of file ProjectionErrorVariable.h.

Referenced by build_(), and fprop().

Mat PLearn::ProjectionErrorVariable::B

Definition at line 74 of file ProjectionErrorVariable.h.

Referenced by build_(), and fprop().

Mat PLearn::ProjectionErrorVariable::dF

Definition at line 74 of file ProjectionErrorVariable.h.

Referenced by bprop(), and build_().

real PLearn::ProjectionErrorVariable::epsilon

Definition at line 68 of file ProjectionErrorVariable.h.

Referenced by fprop().

Mat PLearn::ProjectionErrorVariable::F

Definition at line 74 of file ProjectionErrorVariable.h.

Referenced by bprop(), build_(), and fprop().

Mat PLearn::ProjectionErrorVariable::FT

Definition at line 74 of file ProjectionErrorVariable.h.

Referenced by build_(), and fprop().

Mat PLearn::ProjectionErrorVariable::FT1

Definition at line 74 of file ProjectionErrorVariable.h.

Referenced by build_(), and fprop().

Mat PLearn::ProjectionErrorVariable::FT2

Definition at line 74 of file ProjectionErrorVariable.h.

Referenced by build_(), and fprop().

Vec PLearn::ProjectionErrorVariable::fw

Definition at line 73 of file ProjectionErrorVariable.h.

Referenced by bprop(), build_(), and fprop().

Mat PLearn::ProjectionErrorVariable::fw_minus_t

Definition at line 75 of file ProjectionErrorVariable.h.

Referenced by bprop(), build_(), and fprop().

int PLearn::ProjectionErrorVariable::n

Definition at line 64 of file ProjectionErrorVariable.h.

Referenced by build_(), and fprop().

int PLearn::ProjectionErrorVariable::n_dim

Definition at line 71 of file ProjectionErrorVariable.h.

Referenced by bprop(), build_(), and fprop().

Vec PLearn::ProjectionErrorVariable::norm_err

Definition at line 73 of file ProjectionErrorVariable.h.

Referenced by bprop(), build_(), and fprop().

Vec PLearn::ProjectionErrorVariable::norm_f

Definition at line 73 of file ProjectionErrorVariable.h.

Referenced by build_(), and fprop().

real PLearn::ProjectionErrorVariable::norm_penalization

Definition at line 67 of file ProjectionErrorVariable.h.

Referenced by bprop(), build_(), and fprop().

bool PLearn::ProjectionErrorVariable::normalize_by_neighbor_distance

Definition at line 66 of file ProjectionErrorVariable.h.

Referenced by bprop(), and fprop().

Vec PLearn::ProjectionErrorVariable::one_over_norm_T

Definition at line 73 of file ProjectionErrorVariable.h.

Referenced by bprop(), build_(), and fprop().

bool PLearn::ProjectionErrorVariable::ordered_vectors

Definition at line 70 of file ProjectionErrorVariable.h.

Referenced by bprop(), build_(), and fprop().

real PLearn::ProjectionErrorVariable::regularization

Definition at line 69 of file ProjectionErrorVariable.h.

Referenced by fprop().

Vec PLearn::ProjectionErrorVariable::rhs

Definition at line 73 of file ProjectionErrorVariable.h.

Referenced by build_().

Vec PLearn::ProjectionErrorVariable::S

Definition at line 73 of file ProjectionErrorVariable.h.

Referenced by fprop().

int PLearn::ProjectionErrorVariable::T

Definition at line 72 of file ProjectionErrorVariable.h.

Referenced by bprop(), build_(), and fprop().

Mat PLearn::ProjectionErrorVariable::TT

Definition at line 74 of file ProjectionErrorVariable.h.

Referenced by bprop(), build_(), and fprop().

Vec PLearn::ProjectionErrorVariable::Tu

Definition at line 73 of file ProjectionErrorVariable.h.

Referenced by build_(), and fprop().

bool PLearn::ProjectionErrorVariable::use_subspace_distance

Definition at line 65 of file ProjectionErrorVariable.h.

Referenced by bprop(), build_(), and fprop().

Mat PLearn::ProjectionErrorVariable::Ut

Definition at line 74 of file ProjectionErrorVariable.h.

Referenced by build_(), and fprop().

Vec PLearn::ProjectionErrorVariable::uu

Definition at line 73 of file ProjectionErrorVariable.h.

Referenced by build_(), and fprop().

Mat PLearn::ProjectionErrorVariable::V

Definition at line 74 of file ProjectionErrorVariable.h.

Referenced by build_(), and fprop().

Mat PLearn::ProjectionErrorVariable::VVt

Definition at line 74 of file ProjectionErrorVariable.h.

Referenced by build_().

Mat PLearn::ProjectionErrorVariable::w

Definition at line 76 of file ProjectionErrorVariable.h.

Referenced by bprop(), build_(), and fprop().

Vec PLearn::ProjectionErrorVariable::ww

Definition at line 73 of file ProjectionErrorVariable.h.

Referenced by bprop(), build_(), and fprop().

Vec PLearn::ProjectionErrorVariable::wwuu

Definition at line 73 of file ProjectionErrorVariable.h.

Referenced by build_(), and fprop().

Mat PLearn::ProjectionErrorVariable::wwuuM

Definition at line 74 of file ProjectionErrorVariable.h.

Referenced by build_().


The documentation for this class was generated from the following files:

ProjectionErrorVariable.h
ProjectionErrorVariable.cc