Deep Learning for AI
IJCAI 2018 Tutorial (July 13, 2018)
Yoshua Bengio
University of Montreal & MILA
Keywords: machine learning, representation learning, deep learning
Abstract: There has been much progress in AI in recent years thanks to advances in deep learning, especially in areas such as computer vision, speech recognition, natural language processing, game playing, robotics, and machine translation. This tutorial explains some of the core concepts and motivations behind deep learning and representation learning. Deep learning builds on many ideas introduced decades earlier with the connectionist approach to machine learning, inspired by the brain. These essential early contributions include the notion of distributed representation and the back-propagation algorithm for training multi-layer neural networks, as well as the architectures of recurrent and convolutional neural networks. Beyond the substantial increase in computing power and dataset sizes, many modern additions have contributed to the recent successes. These include techniques that make it possible to train networks with more layers, which can generalize better (hence the name deep learning), as well as a better theoretical understanding of the success of deep learning, from both an optimization and a generalization point of view. They also include advances in architectures that have moved neural nets from pattern-recognition devices operating on vectors to general-purpose differentiable modular machines that can handle arbitrary data structures, mostly thanks to the use of attention mechanisms. Two other areas of major progress are unsupervised learning, in particular the ability of neural networks to stochastically generate high-dimensional samples (such as images) from a possibly conditional distribution, and the combination of reinforcement learning with deep learning, not just for traditional applications of reinforcement learning such as games, but also as a way to handle learning in systems involving non-differentiable or black-box components. The tutorial ends with a discussion of some major open problems for AI at the forefront of deep learning research.
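The core training recipe the abstract refers to, back-propagation with (stochastic) gradient descent on a multi-layer network, can be illustrated in a few lines. The following is a minimal sketch, not material from the tutorial: a hand-written two-layer network learning XOR with NumPy, where the architecture, learning rate, and variable names are all illustrative assumptions.

```python
# Minimal sketch (not from the tutorial): a two-layer neural network
# with a distributed hidden representation, trained by back-propagation
# and gradient descent on a toy problem (XOR). All hyperparameters are
# illustrative choices.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: XOR, a classic task a single linear layer cannot solve.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

# Parameters of a 2-8-1 multi-layer network.
W1 = rng.normal(0.0, 1.0, (2, 8))
b1 = np.zeros(8)
W2 = rng.normal(0.0, 1.0, (8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(10000):
    # Forward pass: the hidden layer h is a distributed representation.
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Back-propagation: the chain rule applied layer by layer
    # (gradient of half the mean squared error wrt each parameter).
    dp = (p - y) / len(X)
    dz2 = dp * p * (1 - p)            # through the output sigmoid
    dW2 = h.T @ dz2
    db2 = dz2.sum(axis=0)
    dh = dz2 @ W2.T
    dz1 = dh * h * (1 - h)            # through the hidden sigmoid
    dW1 = X.T @ dz1
    db1 = dz1.sum(axis=0)

    # Gradient descent update; here the "batch" is the whole 4-example
    # dataset, so this is plain (rather than stochastic) gradient descent.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(np.round(p, 2))  # should approach [[0], [1], [1], [0]]
```

On this toy problem the hidden units jointly encode the input, a small-scale instance of the distributed representations the abstract credits for deep learning's generalization power.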
Speaker Bio
Yoshua Bengio (computer science PhD, McGill University, 1991; post-docs at MIT and Bell Labs; computer science professor at Université de Montréal since 1993) has authored three books and over 300 publications (h-index over 100, more than 100,000 citations), mostly in deep learning. He holds a Canada Research Chair in Statistical Learning Algorithms, is an Officer of the Order of Canada, and is a recipient of the 2017 Marie-Victorin Quebec Prize. He is a CIFAR Senior Fellow and co-directs its Learning in Machines and Brains program. He is scientific director of the Montreal Institute for Learning Algorithms (MILA), currently the largest academic research group on deep learning. He serves on the NIPS Foundation board (having previously been program chair and general chair) and co-created the ICLR conference, which specializes in deep learning. A pioneer of deep learning, his goal is to uncover the principles giving rise to intelligence through learning and to contribute to the development of AI for the benefit of all.
Tutorial outline
- Motivating introduction to deep learning
- Why deep learning works so well:
  - Distributed representation & depth: compositional priors to defeat the curse of dimensionality
  - High-dimensional non-convex optimization may be easier than initially thought
  - Backprop; efficiency and regularization effects of stochastic gradient descent
  - Convolutional neural networks & parameter sharing
  - Recurrent neural networks & long-term dependencies
- What's new with neural nets since the 90s
  - Depth, piecewise-linear activation functions & skip connections
  - Attention mechanisms, memory and operating on data structures (see the sketch after this outline)
  - Autoencoders and deep generative models
  - Advances in transfer learning and meta-learning
  - Architectures & applications to natural language processing
  - Architectures & applications involving images
  - Deep reinforcement learning and playing games
- What next?
  - Challenges of unsupervised representation learning
  - Challenges of agent learning, deep reinforcement learning
  - From perception to higher cognition, from competence to comprehension
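As a companion to the outline item on attention mechanisms, here is a minimal sketch of scaled dot-product attention, the kind of building block the abstract credits for letting neural nets operate on arbitrary data structures. The function names, shapes, and random inputs are illustrative assumptions, not code from the tutorial.

```python
# Minimal sketch (illustrative, not the tutorial's code) of scaled
# dot-product attention: each query attends to all keys, producing a
# weighted sum of the corresponding values.
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # subtract max for stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Q: (n_q, d), K: (n_k, d), V: (n_k, d_v) -> output: (n_q, d_v)."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)        # similarity of queries and keys
    weights = softmax(scores, axis=-1)   # one distribution per query
    return weights @ V                   # convex combination of values

rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 8))    # 2 queries
K = rng.normal(size=(5, 8))    # 5 keys
V = rng.normal(size=(5, 16))   # 5 values
print(attention(Q, K, V).shape)  # (2, 16)
```

Because the attention weights are computed from the content of the keys rather than from fixed positions, the same mechanism applies to sets, sequences, and graphs of varying size, which is what makes it useful for the general-purpose differentiable machines described in the abstract.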