Recent Research Highlights
NEW CONFERENCE ON REPRESENTATION LEARNING: ICLR
Almost full list of publications
Selected Recent Papers
- Radically new approaches to deep unsupervised learning
with joint training of all levels, avoiding marginalizing/MAP/MCMC over latent variables:
- Exploiting the recent advances in understanding the probabilistic interpretation of auto-encoders
in order to perform credit assignment without backprop and train deep generative models:
How Auto-Encoders Could Provide Credit Assignment in Deep Networks via Target Propagation.
Deep Generative Stochastic Networks Trainable by Backprop,
Yoshua Bengio, Eric Thibodeau-Laufer and Jason Yosinski, Université de
Montréal, arXiv report 1306.1091, 2013 (also, an ICML'2014 paper).
Generalized Denoising Auto-Encoders as Generative Models,
Yoshua Bengio, Li Yao, Guillaume Alain, and Pascal Vincent, Université de
Montréal, arXiv report 1305.6663, 2013 (also, an NIPS'2013 paper).
- Python/Theano code for
GSNs and the experiments in the above 2 papers.
- Four challenges of deep learning, and ideas to attack them:
scaling computation, optimization, inference & sampling, disentangling.
- Figuring out what regularized auto-encoders are doing in terms of capturing the data-generating distribution, and exploiting this to sample from them:
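The sampling recipe from the papers above is short enough to sketch: alternately corrupt the current configuration and denoise it with the trained reconstruction function, which defines a Markov chain whose stationary distribution estimates the data-generating distribution. A minimal NumPy sketch, assuming a trained `reconstruct` function and Gaussian corruption (both hypothetical stand-ins for the actual model and corruption process):

    import numpy as np

    rng = np.random.default_rng(0)

    def sample_chain(reconstruct, x0, n_steps=100, noise_std=0.5):
        """Corrupt-then-denoise Markov chain (Bengio et al., 2013).

        reconstruct: trained denoising function mapping a corrupted input
                     back toward the data manifold (hypothetical stand-in).
        x0:          starting point (a training example, or noise).
        """
        x = np.asarray(x0, dtype=float)
        samples = []
        for _ in range(n_steps):
            x_corrupted = x + noise_std * rng.standard_normal(x.shape)  # C(x~|x)
            x = reconstruct(x_corrupted)                                # P(x|x~)
            samples.append(x.copy())
        return samples

Note that in the papers the reconstruction step samples from P(X | X~) rather than applying a deterministic mean, so a faithful implementation would also inject reconstruction noise at each step.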
- Recurrent nets are back!
- Breaking through the SOTA in machine translation: Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation, EMNLP'2014.
On the difficulty of training recurrent neural networks,
Razvan Pascanu, Tomas Mikolov and Yoshua Bengio, ICML'2013.
- Advances in
Optimizing Recurrent Networks, Yoshua Bengio, Nicolas
Boulanger-Lewandowski, Razvan Pascanu, arXiv report 1212.0901, 2012.
Modeling Temporal Dependencies in High-Dimensional Sequences:
Application to Polyphonic Music Generation and Transcription,
Nicolas Boulanger-Lewandowski, Yoshua Bengio and Pascal
Vincent, in: Proceedings of the Twenty-ninth International
Conference on Machine Learning (ICML'12), ACM, 2012.
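The Pascanu et al. paper diagnoses exploding gradients in recurrent nets and proposes rescaling the gradient whenever its norm crosses a threshold. A minimal sketch of that clipping rule in plain NumPy (the threshold value is illustrative):

    import numpy as np

    def clip_gradient_norm(grads, threshold=1.0):
        """Rescale gradients when their global norm exceeds `threshold`,
        the exploding-gradient fix of Pascanu, Mikolov & Bengio.

        grads: list of gradient arrays, one per model parameter.
        """
        total_norm = np.sqrt(sum(np.sum(g ** 2) for g in grads))
        if total_norm > threshold:
            scale = threshold / total_norm          # shrink, keep direction
            grads = [g * scale for g in grads]
        return grads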
- Deeper representations can help sample better by reducing the mixing problem of most MCMC methods and by unfolding the data manifold.
- Maxout Networks combine dropout noise injection with max-linear operations
to beat SOTA on image datasets.
Maxout Networks, Ian Goodfellow, David Warde-Farley, Mehdi Mirza,
Aaron Courville and Yoshua Bengio, ICML'2013
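A maxout unit outputs the maximum over a group of k linear responses, and the paper pairs this activation with dropout. A small NumPy sketch of a maxout layer's forward pass (shapes and names are illustrative, not the paper's code):

    import numpy as np

    def maxout_forward(x, W, b):
        """Maxout layer (Goodfellow et al., 2013): each output unit is
        the max over its k linear 'pieces'.

        x: (batch, n_in) inputs
        W: (n_in, n_out, k) weights, one linear map per piece
        b: (n_out, k) biases
        """
        z = np.einsum('bi,iok->bok', x, W) + b  # (batch, n_out, k) pieces
        return z.max(axis=-1)                   # max over the k pieces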
- A theory relating the evolution of culture and memes with local minima in the training of deep neural networks.
- Hyper-parameters can be optimized in a
systematic way that can make machine learning experiments more
reproducible and more computationally efficient
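One such systematic procedure is random search over the hyper-parameter space (Bergstra & Bengio, JMLR 2012): it is reproducible from a single seed and parallelizes trivially. A minimal sketch, with `train_and_score` a hypothetical callback standing in for a full training run:

    import numpy as np

    def random_search(train_and_score, n_trials=50, seed=0):
        """Random hyper-parameter search (Bergstra & Bengio, 2012).

        train_and_score: callback running one experiment and returning
                         a validation score to maximize (hypothetical).
        """
        rng = np.random.default_rng(seed)  # seeding makes the search reproducible
        best_score, best_params = -np.inf, None
        for _ in range(n_trials):
            params = {
                # sample scale-sensitive hyper-parameters on a log scale
                'learning_rate': 10 ** rng.uniform(-4, -1),
                'n_hidden': int(rng.integers(100, 2000)),
                'dropout': rng.uniform(0.0, 0.7),
            }
            score = train_and_score(**params)
            if score > best_score:
                best_score, best_params = score, params
        return best_params, best_score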
- Representation learning for NLP, representing not just words but
phrases or sentences, machine translation, semantic graphs,
multiple datasets on overlapping variables, or knowledge bases
- Spike-and-slab RBM and sparse coding do a
good job at modeling pixels and their interactions in images,
learning features that excel at transfer learning and
generating outstanding images
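The "spike-and-slab" name refers to the model's latent variables: each hidden unit is the product of a binary spike, gating whether a feature is present, and a real-valued Gaussian slab encoding its intensity. A toy NumPy sketch of sampling such a variable (parameters illustrative, not a trained model's):

    import numpy as np

    rng = np.random.default_rng(0)

    def sample_spike_and_slab(p_spike, slab_mean, slab_std, size):
        """Draw h = z * s with z ~ Bernoulli(p_spike) (the 'spike')
        and s ~ Normal(slab_mean, slab_std) (the 'slab')."""
        z = rng.random(size) < p_spike             # binary feature presence
        s = rng.normal(slab_mean, slab_std, size)  # real-valued intensity
        return z * s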
- Contractive Auto-Encoders learn manifold
structure, beating the state-of-the-art in knowledge-free
MNIST and facial expression detection
Contractive Auto-Encoders: Explicit invariance during feature extraction,
Salah Rifai, Pascal Vincent, Xavier Muller, Xavier Glorot and
Yoshua Bengio, in: ICML'2011
Disentangling factors of variation for facial expression recognition,
Salah Rifai, Yoshua Bengio, Aaron Courville, Pascal Vincent
and Mehdi Mirza, in: ECCV'2012
Manifold Tangent Classifier, Salah Rifai, Yann Dauphin,
Pascal Vincent, Yoshua Bengio and Xavier Muller, in: NIPS'2011
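The contractive penalty adds the squared Frobenius norm of the encoder's Jacobian to the reconstruction loss; for a sigmoid encoder h = sigmoid(Wx + b) it has the closed form sum_j (h_j (1 - h_j))^2 ||W_j||^2 used in the ICML'2011 paper. A NumPy sketch of that term:

    import numpy as np

    def contractive_penalty(x, W, b):
        """Squared Frobenius norm of the encoder Jacobian for a sigmoid
        encoder h = sigmoid(W x + b), as penalized in Rifai et al. (2011).

        x: (n_in,) one input;  W: (n_hidden, n_in);  b: (n_hidden,)
        """
        h = 1.0 / (1.0 + np.exp(-(W @ x + b)))       # encoder activations
        dh = (h * (1.0 - h)) ** 2                    # squared sigmoid derivative
        return np.sum(dh * np.sum(W ** 2, axis=1))   # ||J_f(x)||_F^2 in closed form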
Machine Learning Competitions Won
- Winning the ICMI 2013 Grand Challenge on Emotion Recognition in the Wild!
(The challenge baseline accuracy was 27.5%; our approach yielded 41.0%.)
Kahou, S. E., Pal, C., Bouthillier, X., Froumenty, P., Gulcehre, C., *, Memisevic, R., Vincent, P., Courville, A. and Bengio, Y. (2013)
Combining Modality Specific Deep Neural Networks for Emotion Recognition in Video. In Proceedings of the 15th ACM International Conference on Multimodal Interaction (ICMI '13) pp. 543-550. [ACM digital library definitive version].
*Note: please see the additional-authors section in the .pdf of the paper above for the full author list; the additional authors belong at the position marked *.
- The Unsupervised and Transfer Learning Challenge, presented at the ICML 2011
and IJCNN 2011 workshops of the same name, was won
by LISA members using unsupervised layer-wise pre-training.
- We also won the Transfer
Learning Challenge at NIPS 2011's Challenges in Learning
Hierarchical Models Workshop, using spike-and-slab
sparse coding (ICML 2012 paper).
Review Papers and Books
Deep Learning - an MIT Press book in preparation
- Unsupervised Feature
Learning and Deep Learning: A Review and New Perspectives,
Yoshua Bengio, Aaron Courville and Pascal Vincent, U. Montreal,
arXiv report 1206.5538, 2012.
- Practical recommendations for gradient-based training of deep
architectures, Yoshua Bengio, U. Montreal, arXiv
report 1206.5533, in: Lecture Notes in Computer Science Volume 7700,
Neural Networks: Tricks of the Trade Second Edition, Editors:
Grégoire Montavon, Geneviève B. Orr, Klaus-Robert Müller.
- Deep Learning of Representations, Yoshua Bengio and Aaron
Courville, in: Handbook on Neural Information Processing,
Springer: Berlin Heidelberg, 2012.
- Deep Learning of Representations for Unsupervised and Transfer
Learning, Yoshua Bengio, in: JMLR W&CP: Proc.
Unsupervised and Transfer Learning challenge and workshop, 2012.
- Learning deep architectures for AI, Yoshua Bengio (2009), in:
Foundations and Trends in Machine Learning, 2:1(1--127). Also
published as a book, Now Publishers, 2009.