Yoshua Bengio
Full Professor
Department of Computer Science and Operations Research
Canada Research Chair in Statistical Learning Algorithms
<my first name> (dot) <my last name> (at sign) umontreal (dot) ca


Research

My long-term goal is to understand intelligence; understanding the underlying principles would deliver artificial intelligence, and I believe that learning algorithms are essential in this quest.

Machine learning algorithms attempt to endow machines with the ability to capture operational knowledge through examples, e.g., allowing a machine to classify or predict correctly in new cases. Machine learning research has been extremely successful in the past two decades and is now applied in many areas of science and technology; well-known examples include web search engines, natural language translation, speech recognition, machine vision, and data mining. Yet machines still seem to fall short of even mammal-level intelligence in many respects. One of the remaining frontiers of machine learning is the difficulty of learning the kind of complicated and highly-varying functions that are necessary to perform machine vision or natural language processing tasks at a level comparable to humans (even a 2-year-old).

See my lab's long-term vision web page for a broader introduction.
An introductory discussion of recent and ongoing research is below. See the lab's publications site for a downloadable and complete bibliographic list of my papers.

See this page for recent research highlights and selected papers.


Much of my fundamental research is funded by the Natural Sciences and Engineering Research Council of Canada (NSERC), the Network of Centers of Excellence on the Mathematics of Information Technology and Complex Systems (MITACS), and the Canada Research Chairs program.


Below you will find a brief description of selected topics, including pointers to selected recent publications. More details can be found on my publications site:


The Need for Deep Architectures

As described in a 2007 NSF report on Future Challenges for the Science and Engineering of Learning, one of the missing ingredients is depth. Deep learning methods aim at learning feature hierarchies, with features at higher levels of the hierarchy formed by the composition of lower-level features. Automatically learning features at multiple levels of abstraction allows a system to learn complex functions mapping the input to the output directly from data, without depending completely on human-crafted features. This is especially important for higher-level abstractions, which humans often do not know how to specify explicitly. Complexity theory theorems suggest that one of the missing ingredients in current learning algorithms is depth of architecture (the number of levels of composition in the learned function, e.g., the number of layers of a neural network), illustrated below.

Illustration of circuit depth, for two circuits built from different element sets.
On the left, the circuit implements x*sin(a*x+b); its depth is 4.
On the right, the circuit implements a neural network with three layers; its depth is 3.
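To make the notion of depth concrete, here is a minimal Python sketch (my own illustrative construction, not taken from any of the papers) that builds the left circuit of the figure as a small computation graph and measures depth as the longest path from an input to the output:

# Illustrative sketch: represent x*sin(a*x+b) as a computation graph over the
# element set {*, +, sin} and measure its depth, as in the figure above.

class Node:
    def __init__(self, op, *inputs):
        self.op = op          # name of the elementary operation (or of a leaf)
        self.inputs = inputs  # parent nodes; empty for leaves

    def depth(self):
        # Depth = length of the longest path from a leaf to this node.
        if not self.inputs:
            return 0
        return 1 + max(parent.depth() for parent in self.inputs)

# Leaves: the input x and the parameters a, b.
x, a, b = Node("x"), Node("a"), Node("b")

ax = Node("*", a, x)          # a*x
ax_b = Node("+", ax, b)       # a*x + b
s = Node("sin", ax_b)         # sin(a*x + b)
y = Node("*", x, s)           # x * sin(a*x + b)

print(y.depth())              # -> 4, matching the depth quoted in the caption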


The most suggestive result is that while a depth-k architecture can represent a particular function efficiently, a depth-(k-1) architecture might need an exponential number of components, and thus a huge number of training examples. See these papers for a broader discussion:


Learning Algorithms for Deep Architectures

Until 2006, attempts at training deep architectures (e.g., multi-layer neural networks with more than 2 hidden layers) were unsuccessful. Since then, several strategies have been proposed and successfully demonstrated for training deeper architectures. A prominent example is the Deep Belief Network, a graphical model structured as a stochastic neural network with many layers that can be trained in an unsupervised way, one layer at a time.

Training and initializing each layer of a 3-layer Deep Belief Network as a Restricted Boltzmann Machine (RBM).
The RBM is an unsupervised model of its input, which is the output of the previous layer. Each layer represents
a higher-level abstraction, a more non-linear (stochastic) transformation of the raw input x.
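The caption above describes the greedy recipe: train an RBM on the data, then train a second RBM on the hidden representation produced by the first, and so on. Below is a rough numpy sketch of that idea, using CD-1 (one-step contrastive divergence) as the RBM update; the layer sizes, learning rate, number of epochs, and the random toy data are all illustrative assumptions, not settings from any paper.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class RBM:
    # Binary Restricted Boltzmann Machine trained with CD-1 (a common, approximate update).
    def __init__(self, n_visible, n_hidden, lr=0.05):
        self.W = 0.01 * rng.standard_normal((n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)
        self.b_h = np.zeros(n_hidden)
        self.lr = lr

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.b_h)

    def visible_probs(self, h):
        return sigmoid(h @ self.W.T + self.b_v)

    def cd1_update(self, v0):
        ph0 = self.hidden_probs(v0)                       # positive phase
        h0 = (rng.random(ph0.shape) < ph0).astype(float)  # sample the hidden units
        pv1 = self.visible_probs(h0)                      # one step of Gibbs sampling
        ph1 = self.hidden_probs(pv1)                      # negative phase
        # Approximate log-likelihood gradient (contrastive divergence).
        self.W += self.lr * (v0.T @ ph0 - pv1.T @ ph1) / len(v0)
        self.b_v += self.lr * (v0 - pv1).mean(axis=0)
        self.b_h += self.lr * (ph0 - ph1).mean(axis=0)

def pretrain_stack(data, layer_sizes, epochs=10, batch=32):
    # Greedy layer-wise pretraining: each RBM models the output of the previous layer.
    rbms, inputs = [], data
    for n_hidden in layer_sizes:
        rbm = RBM(inputs.shape[1], n_hidden)
        for _ in range(epochs):
            for i in range(0, len(inputs), batch):
                rbm.cd1_update(inputs[i:i + batch])
        rbms.append(rbm)
        inputs = rbm.hidden_probs(inputs)  # representation fed to the next layer
    return rbms

# Toy example: a 3-layer stack trained on random binary data.
X = (rng.random((256, 64)) < 0.5).astype(float)
stack = pretrain_stack(X, layer_sizes=[32, 16, 8])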

Each layer represents a factor model for the layer below, and altogether we obtain a highly non-linear factor model. Since 2006, the number of papers on deep architectures has grown very quickly, several other algorithms for deep architectures have been proposed, exciting experimental results have been obtained on a wide variety of tasks, and funding agencies are starting to perceive the importance of this research question. Hinton et al. introduced Deep Belief Networks in a 2006 Neural Computation paper. I wrote the following review paper on the motivations and algorithms for deep architectures:
From our lab, these other papers can be considered follow-up work on Hinton et al.'s foundational work on RBMs and DBNs:



The Need for Non-Local Generalization and Distributed Representations

In addition to depth of architecture, we have found that another ingredient is crucial: distributed representations. We and others have found that most non-parametric learning algorithms suffer from the so-called curse of dimensionality. We would prefer to call it the curse of highly-varying functions or the curse of locality. That curse occurs when the only way a learning algorithm generalizes to a new case x is by exploiting only a raw notion of similarity (such as Euclidean distance) between the cases. This is typically done by the learner looking in its training examples for cases that are close to x according to some similarity measure. One can often interpret what these algorithms do as some kind of interpolation between the neighboring examples. Imagine trying to approximate a function by many small linear or constant pieces. We need at least one example for each piece. We can figure out what each piece should look like by looking mostly at the examples in the neighborhood of each piece. If the target function has a lot of variations, we'll need correspondingly many training examples. In dimension d (or on a manifold of dimension d), the number of variations may grow exponentially with d, hence the number of required examples. However, if we are lucky, we may still obtain good results when we are trying to discriminate between two highly complicated regions (manifolds), e.g. associated with two classes of objects. Even though each manifold may have many variations, they might be separable by a smooth (maybe even linear) decision surface. That is the situation where local non-parametric algorithms work well. They also work comparatively well when the distribution of examples is very noisy, because it is difficult to capture much signal then (and a smooth predictor is pretty much the best one can do).

Distributed representations are transformations of the data that compactly capture many different factors of variations present in the data. Because many examples can inform us about each of these factors, and because each factor may tell us something about examples that are very far from the training examples, it is possible to generalize non-locally, and escape the curse of dimensionality. The simplest distributed representation is a linear transformation of the input example (e.g., learned by Principal Components Analysis, Partial Least Squares, Independent Component Analysis or other more recent techniques, often called Factor Models). To represent more complicated functions, we need non-linear factor models, such as the Restricted Boltzmann Machine and Auto-Encoder introduced as building blocks of deep architectures.
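As a concrete instance of the simplest case just mentioned, here is a short numpy sketch of a linear factor model learned by PCA (via the SVD), in which every example is re-described by K factors shared across the whole dataset; the data here are random and purely illustrative.

import numpy as np

# Sketch: PCA as the simplest (linear) distributed representation.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 20))                # 500 examples, 20 raw input dimensions
Xc = X - X.mean(axis=0)                           # center the data
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
K = 5
codes = Xc @ Vt[:K].T                             # a K-dimensional code (factors) per example
reconstruction = codes @ Vt[:K] + X.mean(axis=0)  # approximate the input from the code
print(codes.shape, reconstruction.shape)          # (500, 5) (500, 20)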



Generalizing locally is easy: it amounts to interpolating between neighboring training examples (starred points, above). In fact, local learning algorithms (generally kernel machines) work very well in low dimension or when the function to capture can be described with a smooth manifold. But when the target function has many ups and downs, one needs at least that many examples.
However, in high dimension, the neighborhood becomes exponentially large, and one requires an exponential number of training examples to cover it. To cover and discriminate among N regions in input space, one would need O(N) examples with a local learning algorithm, but N can grow exponentially with the dimension of the space.
Instead, input variations can sometimes be compactly described using a distributed representation. Above, each hyper-plane creates a 2-way partition of the space, and K hyper-planes (= K linear classifiers = K hidden units of a neural net) can capture and discriminate among up to 2^K regions in input space.
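A tiny numerical sketch of that last point (all numbers below are hypothetical, chosen only for illustration): K linear threshold units assign each input a K-bit code, so they can jointly distinguish up to 2^K regions, whereas a purely local learner would need training examples in each region separately.

import numpy as np

# K random hyper-planes (= K linear classifiers = K hidden units) in a
# d-dimensional input space; with d >= K and general position, they can
# carve out up to 2**K regions, each identified by a distinct K-bit code.
rng = np.random.default_rng(1)
d, K = 10, 8
W = rng.standard_normal((K, d))
b = rng.standard_normal(K)
points = rng.standard_normal((5000, d))
codes = (points @ W.T + b > 0).astype(int)   # one K-bit code per input point
print(len({tuple(c) for c in codes}), "distinct regions observed, out of at most", 2 ** K)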

This is an early paper on fighting the curse of dimensionality in high-dimensional discrete distributions using neural networks:
And this one is an attempt to introduce a non-local component in a non-parametric density estimation algorithm, by using a neural net to predict the manifold structure at each point x, as a function of x:
  • Yoshua Bengio, Hugo Larochelle and Pascal Vincent, Non-Local Manifold Parzen Windows, in: Advances in Neural Information Processing Systems 18 (NIPS'2005), MIT Press, 2006.
Our more recent work has focused on mathematically demonstrating the limitations of modern non-parametric learning algorithms such as SVMs, Gaussian Processes, decision trees, and manifold learning algorithms based on the neighborhood graph:





Strategies for Non-Convex Optimization of Deep Architectures


One basic hypothesis of our work is that the main stumbling block behind past failures at training deep architectures is an optimization difficulty: learning with deep architectures involves a difficult optimization problem, whose exact solution is intractable, and which is made difficult by the presence of local minima and plateaus. However, it is encouraging to note that animals and humans seem to apply sub-optimal but wholly adequate strategies (that we have yet to fully elucidate!). The working strategies we know of involve at least one of these ingredients:
  1. guiding the optimization or
  2. the exploration of many solutions (regions in function space) in parallel.
Since deeper networks are more difficult to train than shallow ones, it is not surprising to find that many of the 'guiding' strategies can be seen as giving hints to the hidden layers, to help them move to regions near good solutions to the learning task. It is conjectured that in order to generalize to a wide array of tasks (e.g., all related to vision), unsupervised and semi-supervised learning (using mostly examples that have not been hand-labeled) is crucial. One way to unify many successful strategies for training deep architectures is to observe that they exploit the idea that one can get near a good solution to the optimization problem by considering a smoother training criterion, and gradually making it closer to the target criterion of interest, tracking a local minimum along the way (this is called a continuation method, sketched below).
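Here is a toy numerical sketch of the continuation idea (everything in it, from the criterion to the smoothing schedule, is an illustrative assumption rather than an algorithm from our papers): minimize a heavily smoothed version of a non-convex criterion first, then gradually reduce the smoothing, warm-starting each stage from the previous solution.

import numpy as np

def criterion(x):
    # A non-convex 1-D "training criterion" with several local minima.
    return x ** 2 + 1.5 * np.cos(5 * x)

noise = np.random.default_rng(0).standard_normal(512)

def smoothed(x, sigma):
    # Smooth the criterion by averaging it over fixed Gaussian perturbations of x.
    return criterion(x + sigma * noise).mean()

def descend(x, f, lr=0.01, steps=400, eps=1e-4):
    # Plain gradient descent using a finite-difference gradient.
    for _ in range(steps):
        g = (f(x + eps) - f(x - eps)) / (2 * eps)
        x -= lr * g
    return x

x = 2.5                                    # a deliberately poor starting point
for sigma in [2.0, 1.0, 0.5, 0.25, 0.0]:   # schedule: from very smooth to the actual criterion
    f = criterion if sigma == 0.0 else (lambda x, s=sigma: smoothed(x, s))
    x = descend(x, f)                      # track a minimum of the gradually sharpened criterion
print("final x:", x, "criterion value:", criterion(x))
# Plain descent from x = 2.5 on the un-smoothed criterion gets stuck in a much
# worse local minimum; the continuation schedule ends near the global one.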


Our research objective is to further improve our understanding of these algorithms and to exploit it to develop new strategies for training deep architectures, taking inspiration from the natural world (how humans manage to learn such complicated tasks). An example of inspiration from the natural world comes from the way in which children develop and learn new skills. Whereas machine learning algorithms are typically provided with a single homogeneous training set of examples, children learn in stages, moving to the next stage only after having mastered the previous one, and they care mostly about examples illustrating the concepts that are on the frontier of their understanding of the world. That is why schooling is normally organized in the form of a curriculum. This principle is also called shaping in the world of animal training, where a sequence of gradually more difficult tasks is set up for an animal.
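Purely as a structural illustration of that staged idea (the task, the difficulty measure, and the thresholds below are placeholders I made up; the underlying problem is trivially convex, so the curriculum is not actually needed here), a curriculum-style training loop might look like this:

import numpy as np

rng = np.random.default_rng(0)

def train_one_pass(w, X, y, lr=0.02):
    # One pass of stochastic gradient descent on a linear least-squares model.
    for xi, yi in zip(X, y):
        w -= lr * (xi @ w - yi) * xi
    return w

def mean_squared_error(w, X, y):
    return float(((X @ w - y) ** 2).mean())

# Toy data with a per-example "difficulty" score (here, arbitrarily, the input norm).
X = rng.standard_normal((300, 5))
true_w = rng.standard_normal(5)
y = X @ true_w + 0.05 * rng.standard_normal(300)
difficulty = np.linalg.norm(X, axis=1)

w = np.zeros(5)
for fraction in [0.25, 0.5, 1.0]:          # curriculum: easiest 25%, then 50%, then everything
    idx = np.argsort(difficulty)[: int(fraction * len(X))]
    for _ in range(200):                   # cap the number of passes per stage
        if mean_squared_error(w, X[idx], y[idx]) < 0.01:
            break                          # stage "mastered": move on to harder examples
        w = train_one_pass(w, X[idx], y[idx])
print("final error on all data:", mean_squared_error(w, X, y))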

This is a new research thread in my lab, and the only published works discussing these ideas are the following. First, we did experiments that confirm that the main issue with the traditional optimization method for deep neural networks lies in the difficulty of optimizing the lower layers. We also showed the importance of the unsupervised layer-wise initialization (as opposed to a supervised one):
In this paper we summarize several of the above ideas on strategies that have been or may be employed to circumvent the optimization difficulty of deep architectures:


Learning sequential dependencies and language modeling

One of the highest-impact results I obtained was about the difficulty of learning sequential dependencies, either in recurrent neural networks or in dynamical graphical models (such as Hidden Markov Models). The paper below suggests that with parametrized dynamical systems (such as a recurrent neural network), the error gradient propagated through many time steps is a poor source of information for learning to capture statistical dependencies that are temporally remote. The mathematical result is that either information is not easily transmitted (it is lost exponentially fast when propagated from the past to the future through a context variable, or it is vulnerable to perturbations and noise), or the gradients relating temporally remote events become exponentially small as the temporal gap grows.
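A minimal numerical illustration of that second effect (my own toy construction, using a linear recurrent network rather than the general setting of the paper): the Jacobian of the state at time T with respect to the state at time 0 is a T-fold product of the recurrent weight matrix, so its norm shrinks (or explodes) exponentially with T depending on the spectral radius.

import numpy as np

# Linear recurrent network h_t = W h_{t-1} + x_t: the Jacobian d h_T / d h_0
# equals W multiplied by itself T times. With spectral radius below 1 its norm
# decays exponentially; above 1 it explodes. Nonlinear recurrences add diagonal
# activation-derivative factors, which typically make the decay even faster.
rng = np.random.default_rng(0)
n = 50
W = rng.standard_normal((n, n))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # rescale so the spectral radius is 0.9

J = np.eye(n)
for T in range(1, 101):
    J = W @ J                               # Jacobian of h_T with respect to h_0
    if T % 20 == 0:
        print(T, np.linalg.norm(J, 2))      # decays roughly like 0.9**T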
Similar results were shown for HMMs here:
Is there any hope? Humans seem to manage to learn through many time scales. This has inspired this paper:

Our recent work with unsupervised learning for deep architectures, Mnih and Hinton's (ICML'2007) work on temporal RBMs, unpublished work by James Bergstra (thesis proposal), and unpublished work by Ilya Sutskever (U. Toronto), all suggest that there may be ways around the issues introduced in the above papers in the mid-90's. Exploring these potential solutions is one of the current undertakings in my lab.
Sequential dependencies are particularly important for modeling the statistical structure of language. We have been working on such models and continue to do so. Our early work was meant to show the advantage of using distributed representations to beat state-of-the-art statistical language models (smoothed n-grams):
  • Yoshua Bengio, Réjean Ducharme, Pascal Vincent and Christian Jauvin, A Neural Probabilistic Language Model, in: Journal of Machine Learning Research, volume 3, pages 1137-1155, 2003.
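For readers unfamiliar with the model, here is a schematic numpy sketch of the forward pass of such a neural probabilistic language model (the sizes and initialization are illustrative, and several details of the full model, such as bias terms, are omitted): each of the n-1 previous words is mapped to a learned real-valued vector, the vectors are concatenated and fed to a hidden layer, and a softmax produces a distribution over the next word.

import numpy as np

rng = np.random.default_rng(0)
V, m, h, n = 1000, 30, 50, 4                      # vocabulary size, embedding size, hidden units, n-gram order

C = 0.01 * rng.standard_normal((V, m))            # word embedding matrix (learned)
H = 0.01 * rng.standard_normal(((n - 1) * m, h))  # input-to-hidden weights
U = 0.01 * rng.standard_normal((h, V))            # hidden-to-output weights

def next_word_distribution(context_ids):
    # context_ids: indices of the n-1 previous words.
    x = C[context_ids].reshape(-1)                # concatenate the context embeddings
    a = np.tanh(x @ H)                            # hidden layer
    logits = a @ U
    e = np.exp(logits - logits.max())             # softmax over the whole vocabulary
    return e / e.sum()

p = next_word_distribution([12, 7, 431])          # three (arbitrary) previous word indices
print(p.shape, round(p.sum(), 6))                 # (1000,) 1.0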
This work has been followed up by many, as discussed in this scholarpedia survey article:
To speed up these models, several approaches were explored:



The Baby AI Project

The Baby AI project is a way to make the above research concrete, with the additional insight that a major source of data about "the world of humans" comes from human-generated productions such as videos, television shows, or web pages, and that images (or image sequences) and natural language (in the form of textual captions or of sound) are the main modalities involved. In addition, since any AI will presumably communicate with humans through these modalities, it will need to master them. Furthermore, it seems obvious that these modalities reinforce each other in the process of extracting the regularities we try to learn.


The objective is to train our Baby AI with a series of increasingly complex scenes illustrating gradually more abstract concepts, rooted in the main modalities humans use to communicate (images, video, language).

Our research suggests that guiding the optimization is crucial to allow learning of complex abstractions.

In the short term we want to explore the limitations of current algorithms for deep architectures through a series of gradually more complex artificial worlds from which we generate examples. There are two advantages to this "starting small" strategy. First, it is easier to understand what works and what does not when the setting and the concepts to learn are simpler. Second, once our algorithms have mastered a set of concepts, it makes sense to apply them to more complex ones, and we can even initialize the new models using the models trained on the simpler concepts. In the medium term, we would like to supplement the virtual data (for which we know the underlying true semantics) with unlabeled data from a very rich source such as TV programs.
Some of the basic research questions that we explore in this setting are the following:
  • Learning features reduces the need for human effort in crafting useful representations of the data. Researchers in fields such as computer vision have spent considerable energy engineering useful feature sets for various tasks and as new tasks emerge more effort will be required to devise new ones. The demonstrated ability of deep learning methods to automatically learn feature-sets from data has the potential to greatly expand the rate of progress in existing applications and could dramatically increase the rate of emergence of new applications for machine learning. We want to explore more learning algorithms that extract explanatory factors or features, as they are important building blocks.
  • Learning features from vast quantities of data has the potential to develop features of greater complexity (at higher levels of the hierarchy) than is achievable by human engineering. These features, reflecting invariances extant in the data, can increase the performance and robustness of a system that uses these features, as has already been confirmed by us and others, not only in classification tasks, but also in regression, dimensionality reduction, modeling textures, language modeling, information retrieval, robotics, and collaborative filtering.
  • Context representation: we are trying to generalize the deep architectures so they represent and learn the dynamics, the context; this is related to older research on long term dependencies and about generalizing probabilistic graphical models with distributed representations.
  • Many levels of abstraction, temporal hierarchy: the need to represent events at many time scales, with finer scales defining hierarchically what happens at coarser scales.
  • Multi-modality with partial observability: in some cases we only observe images, in others only text and sound, etc. We would like our algorithms to be able to infer "explanations" of the input that take into account whichever modalities are available in each particular case. This implies forward-propagating paths for each modality that can be disjoint or connected, as needed.
  • Asynchronicity: the image and text streams are completely asynchronous. How can they be unified so as to learn the joint distribution over the two modalities?
  • Computational efficiency: the images coming from a video can be quite large, and so can the vocabulary. We would like to process them in a reasonable time, so that the learning time per frame is of the order of the inter-frame time. This might require parallelism, implementation optimizations, or special hardware.