PLearn 0.1
Go to the source code of this file.
Namespaces | |
namespace | PLearn |
for swap | |
Functions | |
int | PLearn::eigen_SymmMat (Mat &in, Vec &e_value, Mat &e_vector, int &n_evalues_found, bool compute_all, int nb_eigen, bool compute_vectors, bool largest_evalues) |
int | PLearn::eigen_SymmMat_decreasing (Mat &in, Vec &e_value, Mat &e_vector, int &n_evalues_found, bool compute_all, int nb_eigen, bool compute_vectors=true, bool largest_evalues=true) |
same as the previous call, but eigenvalues/vectors are sorted by largest first (in decreasing order); see the usage sketch below | |
int | PLearn::matInvert (Mat &in, Mat &inverse) |
This function computes the inverse of a matrix. | |
int | PLearn::lapackSolveLinearSystem (Mat &At, Mat &Bt, TVec< int > &pivots) |
void | PLearn::solveLinearSystem (const Mat &A, const Mat &Y, Mat &X) |
for matrices A such that A.length() <= A.width(), find X s.t. | |
void | PLearn::solveTransposeLinearSystem (const Mat &A, const Mat &Y, Mat &X) |
for matrices A such that A.length() >= A.width(), find X s.t. | |
Mat | PLearn::solveLinearSystem (const Mat &A, const Mat &B) |
Vec | PLearn::solveLinearSystem (const Mat &A, const Vec &b) |
Returns solution x of Ax = b (same as above, except b and x are vectors); see the usage sketch below | |
Vec | PLearn::constrainedLinearRegression (const Mat &Xt, const Vec &Y, real lambda) |
void | PLearn::lapackCholeskyDecompositionInPlace (Mat &A, char uplo='L') |
Call LAPACK to perform in-place Cholesky Decomposition of a square SYMMETRIC matrix A. | |
void | PLearn::lapackCholeskySolveInPlace (Mat &A, Mat &B, bool B_is_column_major=false, char uplo='L') |
Call LAPACK to solve in-place a linear system given its previously-computed Cholesky decomposition; see the usage sketch below. | |
Mat | PLearn::multivariate_normal (const Vec &mu, const Mat &A, int N) |
generate N vectors sampled from the normal with mean vector mu and covariance matrix A (see the usage sketch below) | |
Vec | PLearn::multivariate_normal (const Vec &mu, const Mat &A) |
generate a vector sampled from the normal with mean vector mu and covariance matrix A | |
Vec | PLearn::multivariate_normal (const Vec &mu, const Vec &e_values, const Mat &e_vectors) |
generate 1 vector sampled from the normal with mean mu and covariance matrix A = e_vectors * diagonal(e_values) * e_vectors' | |
void | PLearn::multivariate_normal (Vec &x, const Vec &mu, const Vec &e_values, const Mat &e_vectors, Vec &z) |
generate a vector x sampled from the normal with mean mu and covariance matrix A = e_vectors * diagonal(e_values) * e_vectors' (the normal(0,I) sample originally drawn to obtain x is stored in z). | |
void | PLearn::affineNormalization (Mat data, Mat W, Vec bias, real regularizer) |
real | PLearn::GCV (Mat X, Mat Y, real weight_decay, bool X_is_transposed, Mat *W) |
Compute the generalization error estimator called Generalized Cross-Validation (Craven & Wahba 1979), and the corresponding ridge regression weights in min ||Y - X*W'||^2 + weight_decay ||W||^2. | |
real | PLearn::GCVfromSVD (real n, real Y2minusZ2, Vec Z, Vec s) |
Estimate of the generalization error, called Generalized Cross-Validation (Craven & Wahba 1979), computed from the SVD of the input matrix X in the ridge regression. | |
real | PLearn::ridgeRegressionByGCV (Mat X, Mat Y, Mat W, real &best_GCV, bool X_is_transposed=false, real initial_weight_decay_guess=-1, int explore_threshold=5, real min_weight_decay=0) |
Perform ridge regression WITH model selection (i.e. automatic choice of the weight decay); see the usage sketch below. | |
real | PLearn::weightedRidgeRegressionByGCV (Mat X, Mat Y, Vec gamma, Mat W, real &best_gcv, real min_weight_decay=0) |
Similar to ridgeRegressionByGCV, but with support for sample weights gamma. | |
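The sketches below illustrate how a few of the functions above might be called. They are not taken from the PLearn sources: the include path (plearn/math/plapack.h), the Mat/Vec construction and element-access syntax, and the pre-sizing of output arguments are assumptions to check against your PLearn installation.

A minimal call to eigen_SymmMat_decreasing on a small symmetric matrix:

    // Hypothetical example; header location and output pre-sizing are assumptions.
    #include <plearn/math/plapack.h>
    #include <iostream>

    using namespace PLearn;

    int main()
    {
        // A small symmetric 3x3 matrix.
        Mat A(3, 3);
        A(0,0) = 2; A(0,1) = 1; A(0,2) = 0;
        A(1,0) = 1; A(1,1) = 2; A(1,2) = 1;
        A(2,0) = 0; A(2,1) = 1; A(2,2) = 2;

        Vec e_values(3);      // pre-sized outputs (may or may not be required)
        Mat e_vectors(3, 3);
        int n_found = 0;

        // Ask for all eigenvalues and eigenvectors, sorted largest first.
        // The input matrix is passed by non-const reference and may be
        // overwritten by the underlying LAPACK routine.
        eigen_SymmMat_decreasing(A, e_values, e_vectors, n_found,
                                 /*compute_all=*/ true, /*nb_eigen=*/ 3);

        std::cout << n_found << " eigenvalues found" << std::endl;
        return 0;
    }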
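A call to the Vec overload of solveLinearSystem, which returns x such that Ax = b (same hedges as above regarding the header path and Mat/Vec syntax):

    // Hypothetical example; header path and Mat/Vec syntax are assumptions.
    #include <plearn/math/plapack.h>
    #include <iostream>

    using namespace PLearn;

    int main()
    {
        // 2x2 system [3 1; 1 2] x = [9; 8], whose solution is x = [2; 3].
        Mat A(2, 2);
        A(0,0) = 3; A(0,1) = 1;
        A(1,0) = 1; A(1,1) = 2;

        Vec b(2);
        b[0] = 9; b[1] = 8;

        Vec x = solveLinearSystem(A, b);
        std::cout << "x = " << x[0] << ", " << x[1] << std::endl;
        return 0;
    }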
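A sketch of the Cholesky pair, lapackCholeskyDecompositionInPlace followed by lapackCholeskySolveInPlace. The layout expected for the right-hand-side matrix B (rows vs. columns as systems, and the meaning of B_is_column_major) is an assumption to verify against the implementation:

    // Hypothetical example; header path, Mat syntax and the expected layout
    // of the right-hand-side matrix B are assumptions.
    #include <plearn/math/plapack.h>
    #include <iostream>

    using namespace PLearn;

    int main()
    {
        // A symmetric positive-definite 2x2 matrix.
        Mat A(2, 2);
        A(0,0) = 4; A(0,1) = 2;
        A(1,0) = 2; A(1,1) = 3;

        // In-place Cholesky factorization; A is overwritten by its factor
        // ('L' selects the lower-triangular convention).
        lapackCholeskyDecompositionInPlace(A, 'L');

        // One right-hand side, stored here as a 2x1 column; whether multiple
        // right-hand sides go in columns or rows should be double-checked.
        Mat B(2, 1);
        B(0,0) = 6;
        B(1,0) = 5;

        // Solve in place: B is overwritten with the solution (here [1; 1]).
        lapackCholeskySolveInPlace(A, B);

        std::cout << "solution: " << B(0,0) << ", " << B(1,0) << std::endl;
        return 0;
    }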
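Drawing N samples from a multivariate normal with multivariate_normal(mu, A, N). That each row of the returned Mat holds one sample, and the header path, are assumptions not confirmed by the listing above:

    // Hypothetical example; header path, Mat/Vec syntax and the row-per-sample
    // layout of the result are assumptions.
    #include <plearn/math/plapack.h>
    #include <iostream>

    using namespace PLearn;

    int main()
    {
        Vec mu(2);
        mu[0] = 0.0; mu[1] = 1.0;

        // Covariance matrix (symmetric positive-definite).
        Mat C(2, 2);
        C(0,0) = 1.0; C(0,1) = 0.5;
        C(1,0) = 0.5; C(1,1) = 2.0;

        // Draw 100 samples; presumably one sample per row of the result.
        Mat samples = multivariate_normal(mu, C, 100);

        std::cout << samples.length() << " x " << samples.width()
                  << " sample matrix" << std::endl;
        return 0;
    }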
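Ridge regression with automatic weight-decay selection via ridgeRegressionByGCV. The shape of W (m x d, so that the fit minimizes ||Y - X*W'||^2 + weight_decay ||W||^2), the need to pre-size it, and the interpretation of the return value as the selected weight decay are assumptions:

    // Hypothetical example; header path, Mat syntax, the required shape of W
    // and the meaning of the return value are assumptions.
    #include <plearn/math/plapack.h>
    #include <iostream>

    using namespace PLearn;

    int main()
    {
        const int n = 50, d = 3, m = 1;
        Mat X(n, d), Y(n, m);

        // Fill in a small synthetic regression problem.
        for (int i = 0; i < n; ++i) {
            for (int j = 0; j < d; ++j)
                X(i, j) = real((i * (j + 1)) % 7) / 7.0;
            Y(i, 0) = 2 * X(i, 0) - X(i, 1);
        }

        Mat W(m, d);          // assumed to need pre-sizing to m x d
        real best_gcv = 0;

        // Returns the selected weight decay (presumably); best_gcv receives
        // the corresponding GCV estimate of generalization error.
        real weight_decay = ridgeRegressionByGCV(X, Y, W, best_gcv);

        std::cout << "weight decay = " << weight_decay
                  << ", GCV = " << best_gcv << std::endl;
        return 0;
    }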