ResearchVision

Abstract

This page introduces motivations, goals, and research directions for the UdeM machine learning lab (LISA). We start from the goal of understanding intelligence, defining intelligence in terms of operational knowledge. We argue that learning is necessary to acquire the kind of complex knowledge needed for AI, and look at the ingredients missing from current mainstream machine learning research for learning such complicated tasks. One of these ingredients is depth of architecture: computing and representing information at many levels, and composing reusable computations. Whereas the difficulty of optimizing deep architectures seemed insurmountable before 2006, we now understand principles and have learning algorithms for deep architectures that beat the state of the art on many tasks. We believe that much more should be done and can be done in this direction, and we seek inspiration from what is known of natural intelligence.

A longer and more technical write-up on many of the subjects discussed here can be found in this technical report.

Long-Term Goal - machine learning for AI

Our long-term goal is to understand intelligence, which would deliver AI, and we believe that learning algorithms are essential.

What is Intelligence?

We understand intelligence in terms of operational knowledge: knowledge that is represented implicitly or explicitly in such a way that it provides an agent with the ability to perform some tasks. More intelligent agents can cope with a broader set of tasks. Complete understanding of the world (and infinite computational power) would yield the ability to make optimal decisions on all tasks. Conversely, the ability to respond correctly to any question regarding a particular environment is equivalent to complete understanding of that environment. Different agents may have incomparable intelligence (in the sense that one can do things that the other cannot), so the whole notion of 'measuring' intelligence is tricky. We would consider a machine intelligent if it gave us the impression that it understands us and the world we live in. Since we mostly communicate with each other through visual and verbal means, machine vision and natural language understanding are two key abilities we expect from an intelligent machine.

Where Does the Knowledge Come From?

The knowledge that an agent exploits is either handed to it initially ("prior knowledge" obtained from another agent, the person who designed it, its ancestors, etc.) or learned. Even the prior knowledge is somehow acquired (e.g. through evolution, or from the learning done by other agents). Attempts to feed all our human knowledge directly to machines in order to obtain AI have failed because

  • humans do not have complete explicit knowledge of their own operational knowledge (in fact, most of our operational knowledge is not in a form that can be readily verbalized; it is hidden in our brains)
  • explicit knowledge expressed by humans generally lacks a description of the associated uncertainty (our world is not well described by facts; it is better described by probabilistic relations)
  • the engineering effort to build a coherent and computer-exploitable description of human knowledge is failing because of the sheer complexity of hand-crafting and maintaining the coherence of all the pieces of knowledge.

Hence it appears that we need learning algorithms for reaching AI. It does not mean that we should not use our knowledge of a particular domain when solving a particular task, since learning algorithms can combine prior knowledge and knowledge acquired from examples. Much of the literature in machine learning (especially concerning graphical models and domain-specific kernels) concentrates on incorporating prior knowledge in learning. This generally offers the best short-term payoff. On the other hand, it also distracts us from longer-term research on more generic methods that would realistically allow us to move closer towards AI. In our lab, we are choosing to focus our efforts on non-parametric learning algorithms, which offer flexible and general functions. When a particular industrial application presents itself, one can (should) always inject a proper dose of engineering to exploit explicit and implicit human knowledge. However, when the objective is to progress in bold steps towards AI, we believe that research in more generic learning algorithms is a better investment, besides being more exciting, because of the potential leverage on a wider variety of tasks.

What is Missing in Machine Learning?

Machine learning algorithms attempt to endow machines with the ability to capture operational knowledge through examples, e.g., allowing a machine to classify or predict correctly in new cases. Machine learning research has been extremely successful in the past two decades and is now applied in many areas of science and technology, a well known example being web search engines. Yet, machines still seem to fall short of even mammal-level intelligence in many respects. One of the remaining frontiers of machine learning is the difficulty of learning the kind of complicated and highly-varying functions that are necessary to perform machine vision or natural language processing tasks at a level comparable to humans (even a 2-year-old).

As described in a 2007 NSF report on Future Challenges for the Science and Engineering of Learning, one of the missing ingredients is depth. Deep learning methods aim to learn feature hierarchies, in which features at higher levels of the hierarchy are formed by composing lower-level features. Automatically learning features at multiple levels of abstraction allows a system to learn complex functions mapping the input to the output directly from data, without depending completely on human-crafted features. This is especially important for higher-level abstractions, which humans often do not know how to specify explicitly. Complexity theory theorems suggest that one of the missing ingredients in current learning algorithms is depth of architecture, i.e., whereas a depth-k architecture can represent a particular function efficiently, a depth-(k-1) architecture might need an exponential number of components, and thus a huge number of training examples.
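
To make the depth argument tangible, here is a toy illustration of our own (the parity function is a standard textbook example, not something taken from the report above): the parity of N bits can be computed with a chain of N-1 two-input XOR "components", whereas a depth-2 sum-of-products formula needs one AND term per odd-parity input pattern, i.e. 2^(N-1) components.

```python
from itertools import product

def parity_deep_components(n):
    """Deep construction: chain n-1 two-input XOR gates (size stays linear in n)."""
    return n - 1

def parity_shallow_components(n):
    """Depth-2 construction (OR of ANDs): one AND term per odd-parity input pattern."""
    return sum(1 for bits in product([0, 1], repeat=n) if sum(bits) % 2 == 1)

for n in range(2, 13, 2):
    print(f"n={n:2d}  deep: {parity_deep_components(n):3d}  shallow: {parity_shallow_components(n)}")
```

The deep count grows linearly while the shallow count grows as 2^(n-1), which is the flavor of the complexity-theory results mentioned above.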

In addition to depth of architecture, we have found that another ingredient is crucial: distributed representations. We and others have found that most non-parametric learning algorithms suffer from the so-called curse of dimensionality. We would prefer to call it the curse of highly-varying functions or the curse of locality. That curse occurs when the only way a learning algorithm generalizes to a new case x is by exploiting only a raw notion of similarity (such as Euclidean distance) between the cases. This is typically done by the learner looking in its training examples for cases that are close to x according to some similarity measure. One can often interpret what these algorithms do as some kind of interpolation between the neighboring examples. Imagine trying to approximate a function by many small linear or constant pieces. We need at least one example for each piece. We can figure out what each piece should look like by looking mostly at the examples in the neighborhood of each piece. If the target function has a lot of variations, we'll need correspondingly many training examples. In dimension d (or on a manifold of dimension d), the number of variations may grow exponentially with d, hence the number of required examples.

However, if we are lucky, we may still obtain good results when we are trying to discriminate between two highly complicated regions (manifolds), e.g. associated with two classes of objects. Even though each manifold may have many variations, they might be separable by a smooth (maybe even linear) decision surface. That is the situation where local non-parametric algorithms work well. They also work comparatively well when the distribution of examples is very noisy, because it is difficult to capture much signal then (and a smooth predictor is pretty much the best one can do).
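
The following small numerical sketch (ours; the 1-D sine target and 1-nearest-neighbour "interpolation" are arbitrary choices) shows the effect in its simplest form: as the number of variations of the target grows, the training-set size needed to keep the test error low grows along with it.

```python
import numpy as np

rng = np.random.default_rng(0)

def nn_error(n_train, n_oscillations, n_test=1000):
    """Test MSE of 1-nearest-neighbour regression on sin(2*pi*k*x), x in [0, 1]."""
    target = lambda x: np.sin(2 * np.pi * n_oscillations * x)
    x_train = rng.uniform(0, 1, n_train)
    x_test = rng.uniform(0, 1, n_test)
    nearest = np.abs(x_test[:, None] - x_train[None, :]).argmin(axis=1)
    return float(np.mean((target(x_train)[nearest] - target(x_test)) ** 2))

for k in [1, 4, 16, 64]:                              # number of "variations" of the target
    mse = {n: round(nn_error(n, k), 3) for n in [100, 400, 1600, 6400]}
    print(f"{k:3d} oscillations -> MSE by training-set size: {mse}")
```

In one dimension the required number of examples only grows linearly with the number of oscillations; in d dimensions the analogous count can grow exponentially with d, which is the curse described above.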

Distributed representations are transformations of the data that compactly capture many different factors of variation present in the data. Because many examples can inform us about each of these factors, and because each factor may tell us something about examples that are very far from the training examples, it is possible to generalize non-locally and escape the curse of dimensionality. The simplest distributed representation is a linear transformation of the input example (e.g., learned by Principal Components Analysis, Partial Least Squares, Independent Component Analysis or other more recent techniques, often called factor models). To represent more complicated functions, we need non-linear factor models.
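
As a minimal reference point (a sketch of ours, not the lab's code), here is PCA written directly with an SVD: each example is summarized by k real-valued factors, and each principal direction is estimated from all training examples at once, which is what makes the code "distributed".

```python
import numpy as np

def pca_factors(X, k):
    """Project data onto its top-k principal directions: a linear distributed code."""
    X_centered = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X_centered, full_matrices=False)  # rows of Vt are directions
    return X_centered @ Vt[:k].T                                # (n_examples, k) factor values

# toy data: 500 examples in 20 dimensions driven by 3 underlying factors plus noise
rng = np.random.default_rng(0)
latent = rng.standard_normal((500, 3))
X = latent @ rng.standard_normal((3, 20)) + 0.1 * rng.standard_normal((500, 20))
Z = pca_factors(X, k=3)                                         # 3 numbers per example
print(Z.shape)
```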

Why Are We Optimistic Now?

Until 2006, attempts at training deep architectures (e.g. multi-layer neural networks with more than 2 hidden layers) were unsuccessful. In the last two years several strategies have been proposed and successfully demonstrated for training deeper architectures. A prominent example is the Deep Belief Network, a graphical model structured as a stochastic neural network with many layers that can be trained in an unsupervised way, one layer at a time. Each layer is a factor model for the layer below, and altogether we obtain a highly non-linear factor model. Since 2006, the number of papers on deep architectures has grown very quickly, several other algorithms for deep architectures have been proposed, exciting experimental results have been obtained on a wide variety of tasks, and funding agencies are starting to perceive the importance of this research question.
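
As an illustration of the layer-by-layer principle, here is a minimal sketch (our own simplified code, not the lab's implementation): small Restricted Boltzmann Machines trained with one-step contrastive divergence, each one modeling the representation produced by the layer below. Layer sizes, the learning rate, and the toy data are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Restricted Boltzmann Machine trained with 1-step contrastive divergence (CD-1)."""
    def __init__(self, n_visible, n_hidden, lr=0.1):
        self.W = 0.01 * rng.standard_normal((n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)    # visible biases
        self.b_h = np.zeros(n_hidden)     # hidden biases
        self.lr = lr

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.b_h)

    def visible_probs(self, h):
        return sigmoid(h @ self.W.T + self.b_v)

    def cd1_update(self, v0):
        h0 = self.hidden_probs(v0)                            # positive phase
        h_sample = (rng.random(h0.shape) < h0).astype(float)
        v1 = self.visible_probs(h_sample)                     # one Gibbs step (negative phase)
        h1 = self.hidden_probs(v1)
        self.W += self.lr * (v0.T @ h0 - v1.T @ h1) / len(v0)
        self.b_v += self.lr * (v0 - v1).mean(axis=0)
        self.b_h += self.lr * (h0 - h1).mean(axis=0)

def pretrain_stack(data, layer_sizes, epochs=10, batch=32):
    """Greedy layer-wise pretraining: each RBM is a factor model for the layer below."""
    rbms, x = [], data
    for n_hidden in layer_sizes:
        rbm = RBM(x.shape[1], n_hidden)
        for _ in range(epochs):
            for i in range(0, len(x), batch):
                rbm.cd1_update(x[i:i + batch])
        rbms.append(rbm)
        x = rbm.hidden_probs(x)           # this representation feeds the next layer
    return rbms

# toy binary data: 200 examples of 64 "pixels"
X = (rng.random((200, 64)) < 0.3).astype(float)
stack = pretrain_stack(X, layer_sizes=[32, 16])
print([r.W.shape for r in stack])
```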

What's Next?

One basic hypothesis of our work is that the main stumbling block explaining past failures at training deep architectures is an optimization difficulty: learning with deep architectures involves a difficult optimization problem, whose exact solution is intractable. The optimization problem is made difficult by the presence of local minima and plateaus. However, it is encouraging to note that animals and humans seem to apply sub-optimal but wholly adequate strategies (which we have yet to fully elucidate!). The working strategies we know of involve at least one of these ingredients:

  1. guiding the optimization, or
  2. exploring many solutions (regions in function space) in parallel.

Since deeper networks are more difficult to train than shallow ones, it is not surprising to find that many of the 'guiding' strategies can be seen as giving hints to the hidden layers, to help them move to regions near good solutions to the learning task. It is conjectured that in order to generalize to a wide array of tasks (e.g. all related to vision), unsupervised and semi-supervised learning (using mostly examples that have not been hand-labeled) is crucial. One way to unify many successful strategies for training deep architectures is to observe that they exploit the idea that one can get near a good solution of the optimization problem by first considering a smoother training criterion.

Our objective is to further improve our understanding of these algorithms and to exploit it to develop new strategies for training deep architectures, taking inspiration from the natural world (how humans manage to learn such complicated tasks). One example comes from the way in which children develop and learn new skills. Whereas machine learning algorithms are typically provided with a single homogeneous training set of examples, children learn in stages, moving to the next stage only after having mastered the previous one, and they care mostly about examples illustrating the concepts that are on the frontier of their understanding of the world. That is why schooling is normally organized in the form of a curriculum. This principle is also called shaping in the world of animal training, where a sequence of gradually more difficult tasks is set up for the animal; one simple way to turn it into a training loop is sketched below.
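
The sketch below is ours and purely illustrative (the difficulty score and stage schedule are placeholders): sort examples by an estimated difficulty and let the pool of training examples grow stage by stage, so the learner masters easy examples before harder ones are admitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def curriculum_batches(examples, difficulty, n_stages=4, epochs_per_stage=5, batch=32):
    """Yield minibatches stage by stage: start with the easiest examples and
    gradually admit harder ones (a rough analogue of 'shaping')."""
    order = np.argsort(difficulty)                                 # easiest first
    for stage in range(1, n_stages + 1):
        pool = order[: int(len(order) * stage / n_stages)].copy()  # grow the pool each stage
        for _ in range(epochs_per_stage):
            rng.shuffle(pool)
            for i in range(0, len(pool), batch):
                yield examples[pool[i:i + batch]]

# Hypothetical usage: 'difficulty' could be sentence length, image clutter, etc.
X = rng.standard_normal((1000, 20))
difficulty = np.abs(X).sum(axis=1)                                 # stand-in difficulty score
for minibatch in curriculum_batches(X, difficulty):
    pass                                                           # a model update would go here
```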

Elements of research

Without going into details, here are a number of elements that tie together many of the research efforts in the lab.

  • In order to learn the highly complex functions necessary for AI, one needs great quantities of data. This effectively limits us to learning algorithms that are online, or at least whose computation time scales no worse than linearly with the number of examples.

  • Such data sources are mostly unlabeled: we therefore need algorithms capable of both unsupervised and semi-supervised learning.

  • The many concepts that will need to be learned to build an AI are not independent: they are all related to one common reality (the "human world", the physical world as we perceive it, and the relations between the entities of this world). Given their great number, it would be preposterous to try to learn them separately. We will therefore need to use multi-task learning, and probably also multi-modal learning (e.g. sound and image are two different perspectives on a single reality).
Most of the concepts that our AI should learn will be learned in an unsupervised way (i.e., humans do not specify what the set of concepts is or what each concept means), although the learning algorithms could take "hints" from their stream of examples (e.g., by "naming" objects in images and presenting the text together with the corresponding image).

  • The internal representations (of the input, of the context) within these architectures will need to be distributed. Basically, this means that N bits or numbers may represent a number of distinct things that is exponential in N, not linear (in most probabilistic models, N-1 numbers are required just to represent the N possible values of a single discrete random variable). With a distributed representation one may represent the probabilities of $O(2^N)$ configurations with $O(kN)$ numbers, where k depends on the complexity of that distribution (see the sketch after this list).

  • An implicit assumption made in Deep Belief Nets, and that we believe fundamental for humans, is that high-level functions (abstract concepts) can be expressed relatively simply in terms of other, less abstract functions, with many such levels of definition (consider the analogy with dictionary definitions). An implicit consequence is that the lower-level functions can be learned separately, before undertaking the learning of the higher-level functions. Humans learn basic math before building on it with more complex math. Learning algorithms that exploit this principle draw strength from composition, with elements that can be reused. The simple elements are learned before the complex ones that use them, simply from the structure of the distribution of the observable data. Hence we must have the ability to learn deep architectures (many levels of representation), and we believe that simpler concepts (such as those represented in lower levels) should be learned first.

  • We have decided to focus on the following modalities: single images, text, music (see Gamme, the music-oriented part of our lab), and video. The visual and linguistic modalities are central to human communication, and a huge amount of information about the human world is available in images, text (mostly web pages), and video. And music is fun, of course...
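
To make the counting argument of the "distributed representations" item above concrete, here is a tiny check of ours: with N independent binary factors, N parameters suffice to assign a probability to each of the 2^N configurations, and those probabilities sum to one.

```python
import numpy as np
from itertools import product

# N independent binary factors: the joint distribution over all 2^N configurations
# is specified by just N numbers (one probability per factor) instead of 2^N - 1.
N = 10
p = np.random.default_rng(0).uniform(0.1, 0.9, size=N)    # one parameter per factor

def config_prob(bits, p):
    """Probability of one of the 2^N configurations under the factorial model."""
    bits = np.asarray(bits, dtype=float)
    return float(np.prod(p * bits + (1 - p) * (1 - bits)))

# exhaustive check: the N-parameter model assigns mass to every one of the 2^N configs
total = sum(config_prob(cfg, p) for cfg in product([0, 1], repeat=N))
print(f"{2**N} configurations described by {N} parameters; total probability = {total:.6f}")
```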

The Baby AI School project

The Baby AI School project is a way to make the above goals concrete, with the additional insight that a major source of data about "the world of humans" comes from human-generated productions such as videos, television shows, or web pages, and that images (or image sequences) and natural language (in the form of textual captions or of sound) are the main modalities involved. In addition, since any AI will presumably communicate with humans through these modalities, it will need to master them. Furthermore, it seems clear that these modalities reinforce each other in the process of extracting the regularities we try to learn.

In the short term we want to explore the limitations of current algorithms for deep architectures through a series of gradually more complex artificial worlds from which we generate examples. There are two advantages to this "starting small" strategy. First, it is easier to understand what works and what does not when the setting and the concepts to learn are simpler. Second, once our algorithms have mastered a set of concepts, it makes sense to apply them to more complex ones, and we can even initialize the new models using the models trained on the simpler concepts (a small sketch of this warm-starting idea follows below). In the medium term, we would like to complement virtual data (for which we know the underlying true semantics) with unlabeled data from a very rich source such as TV programs.
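
A minimal sketch of the warm-starting idea just mentioned (the layer sizes and the dictionary-of-weights representation are purely illustrative assumptions): keep the first-layer weights learned on the simpler world and add fresh layers for the more complex one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend these weights were trained on the simpler artificial world (sizes are illustrative).
simple_model = {
    "W1": 0.01 * rng.standard_normal((64, 32)),   # first layer: input -> features
    "W2": 0.01 * rng.standard_normal((32, 5)),    # output layer for 5 simple concepts
}

def init_from_simpler(simple_model, n_hidden2=16, n_classes=20):
    """Warm-start a deeper model for a harder world: keep the learned first layer,
    add a fresh hidden layer and a wider output on top."""
    return {
        "W1": simple_model["W1"].copy(),                           # transferred as-is
        "W2": 0.01 * rng.standard_normal((32, n_hidden2)),         # new hidden layer
        "W3": 0.01 * rng.standard_normal((n_hidden2, n_classes)),  # new output layer
    }

complex_model = init_from_simpler(simple_model)
print({k: v.shape for k, v in complex_model.items()})
```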

Some of the basic research questions that we explore in this setting are the following:

  • Learning features reduces the need for human effort in crafting useful representations of the data. Researchers in fields such as computer vision have spent considerable energy engineering useful feature sets for various tasks, and as new tasks emerge more effort will be required to devise new ones. The demonstrated ability of deep learning methods to automatically learn feature sets from data has the potential to greatly increase the rate of progress in existing applications and could dramatically increase the rate of emergence of new applications for machine learning. We want to explore more learning algorithms that extract explanatory factors or features, as they are important building blocks.

  • Learning features from vast quantities of data has the potential to produce features of greater complexity (at higher levels of the hierarchy) than is achievable by human engineering. These features, reflecting invariances present in the data, can increase the performance and robustness of a system that uses them, as has already been confirmed by us and others, not only in classification tasks, but also in regression, dimensionality reduction, texture modeling, language modeling, information retrieval, robotics, and collaborative filtering.

  • Context representation: we are trying to generalize deep architectures so that they represent and learn dynamics and context; this is related to older research on long-term dependencies and to generalizing probabilistic graphical models with distributed representations.

  • Many levels of abstraction, temporal hierarchy: the need to represent events at many time scales, with finer scales defining hierarchically what happens at coarser scales.

  • Multi-modality with partial observability: in some cases we only observe images, in others only text and sound, etc. We would like our algorithms to be able to infer "explanations" of the input that take into account whichever modalities are available in each particular case. This implies forward-propagating paths for each modality that can be disjoint or connected, as needed.

  • Asynchronicity: the image and text streams are completely asynchronous. How can we unify them so as to learn the joint distribution of the two?

  • Computational efficiency: the images coming from a video can be quite big, and so is the vocabulary. We would like to process them in a reasonable time, so that the learning time per frame is of the order of the inter-frame time (see the timing sketch below). This might imply parallelism, implementation optimizations, or special hardware.
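
A rough timing sketch of ours for this budget (frame size, hidden-layer size, and the 30 fps rate are arbitrary assumptions): one online update of a small tied-weights autoencoder is timed against the inter-frame interval.

```python
import time
import numpy as np

rng = np.random.default_rng(0)
FRAME_BUDGET_MS = 1000.0 / 30.0                       # ~33 ms between frames at 30 fps

x = rng.standard_normal(100 * 100)                    # one flattened 100x100 "frame"
W = 0.01 * rng.standard_normal((x.size, 500))         # first-layer weights, 500 hidden units

def one_online_update(x, W, lr=1e-3):
    """One tied-weights autoencoder update on a single frame."""
    h = np.tanh(x @ W)                                # hidden features
    err = h @ W.T - x                                 # linear reconstruction error
    # gradient of 0.5*||err||^2 w.r.t. W (encoder and decoder terms, tied weights)
    W -= lr * (np.outer(x, (err @ W) * (1.0 - h ** 2)) + np.outer(err, h))
    return float((err ** 2).mean())

t0 = time.perf_counter()
one_online_update(x, W)
elapsed_ms = 1000.0 * (time.perf_counter() - t0)
print(f"one update: {elapsed_ms:.1f} ms; per-frame budget: {FRAME_BUDGET_MS:.1f} ms")
```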

-- YoshuaBengio - 15 August 2008
