Representation Learning Tutorial
U. Montreal
June 26th, 2012, Edinburgh, Scotland
The success of machine learning algorithms generally depends on data
representation, and we hypothesize that this is because different
representations can entangle and hide, to varying degrees, the different
explanatory factors of variation behind the data. Although domain
knowledge can be used to help design representations, learning can also
be used, and the quest for AI is motivating the design of more powerful
representation-learning algorithms. We view the ultimate goal of these
algorithms as disentangling the unknown underlying factors of variation
that explain the observed data. This tutorial reviews the basics of
feature learning and deep learning, as well as recent work relating
these subjects to probabilistic modeling and manifold learning.
A further aim is to raise questions and issues about the appropriate
objectives for learning good representations, about computing
representations (i.e., inference), and about the geometrical connections
between representation learning, density estimation, and manifold
learning.
Outline:
- Motivations and Scope
  - Feature / Representation learning
  - Distributed representations
  - Exploiting unlabeled data
  - Deep representations
  - Multi-task / Transfer learning
  - Invariance vs Disentangling
- Algorithms
  - Probabilistic models and RBM variants
  - Auto-encoder variants (sparse, denoising, contractive)
  - Explaining away, sparse coding and Predictive Sparse Decomposition
  - Deep variants
- Analysis, Issues and Practice
  - Tips and tricks
  - Partition function gradient
  - Inference
  - Mixing between modes
  - Geometry and probabilistic interpretations of auto-encoders
  - Open questions
Preliminary version of the slides (pdf)
Bibliographic references (pdf)