Graduate Summer School 2012: Deep Learning and Feature Learning


Institute for Pure and Applied Mathematics (IPAM)



Representation Learning and Deep Learning


Yoshua Bengio

U. Montreal

July 2012, UCLA


The success of machine learning algorithms generally depends on data representation, and we hypothesize that this is because different representations can entangle and hide, to varying degrees, the different explanatory factors of variation behind the data. Although domain knowledge can be used to help design representations, learning can also be used, and the quest for AI is motivating the design of more powerful representation-learning algorithms. We view the ultimate goal of these algorithms as disentangling the unknown underlying factors of variation that explain the observed data.

This tutorial reviews the basics of feature learning and deep learning, as well as recent work relating these subjects to probabilistic modeling and manifold learning. One objective is to raise questions and issues about the appropriate objectives for learning good representations, about computing representations (i.e., inference), and about the geometrical connections between representation learning, density estimation, and manifold learning.
These lectures also include an overview of applications of deep learning to Natural Language Processing, and a discussion of the connections between the optimization difficulty posed by local minima in deep architectures and the evolution of culture, which may play a role in reducing that difficulty.
Outline:
- Videos of lectures
- Final version of the slides (large file: PDF, 48 MB; updated 20/07/2012, 10:15 PT)
- Bibliographic references (PDF, 68 KB)