
Deep Learning Workshop: Foundations and Future Directions


Hyatt hotel, Vancouver, Canada, December 6th, 2007, 2pm-5:30pm, Georgia B room, 2nd floor
Organizers: Yoshua Bengio, Yann LeCun, Ruslan Salakhutdinov and Hugo Larochelle

This is the afternoon part of NIPS'2007 special Neuro-Thursday, sponsored by the Canadian Institute For Advanced Research (CIFAR).

Workshop description

Theoretical results strongly suggest that in order to learn the kind of complicated functions that can represent high-level abstractions (e.g. in vision, language, and other AI-level tasks), one may need "deep architectures", which are composed of multiple levels of non-linear operations (such as in neural nets with many hidden layers). Searching the parameter space of deep architectures is a difficult optimization task, but learning algorithms (e.g. Deep Belief Networks) have recently been proposed to tackle this problem with notable success, beating the state-of-the-art in certain areas.
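
In code, such an architecture is just a composition of parametrized non-linear stages, each re-representing the output of the stage below. Below is a minimal numpy sketch of what "multiple levels of non-linear operations" means; the layer sizes and the tanh non-linearity are illustrative assumptions, not prescriptions from any particular paper.

    import numpy as np

    def deep_forward(x, weights, biases):
        """Forward pass through a deep architecture:
        h_k = tanh(W_k h_{k-1} + b_k) at each level."""
        h = x
        for W, b in zip(weights, biases):
            h = np.tanh(W @ h + b)  # one level of non-linear operations
        return h

    # A hypothetical 4-level network mapping 784 inputs to 10 outputs.
    sizes = [784, 500, 500, 2000, 10]
    rng = np.random.default_rng(0)
    weights = [rng.normal(0.0, 0.01, (m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
    biases = [np.zeros(m) for m in sizes[1:]]
    y = deep_forward(rng.normal(size=784), weights, biases)

Trained from a random initialization like this one, such networks pose exactly the difficult optimization problem described above; the pretraining algorithms in the reading list below are ways around it.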

This workshop is intended to bring together researchers interested in deep learning, both to review the principles and successes of the current algorithms and to identify the challenges and formulate promising directions of investigation. Besides the algorithms themselves, there are many fundamental questions that need to be addressed: What would be a good formalization of deep learning? What new ideas could be exploited to make further inroads into that difficult optimization problem? What makes a good high-level representation or abstraction? What types of problems is deep learning appropriate for?

The workshop is also going to be an opportunity to celebrate Geoff Hinton's 60th birthday and his contributions to theories of neural information processing.

Registration Procedure

Registration is required in order to attend the workshop. Free registration for an evening bus to Whistler is also offered to workshop participants. This bus trip is separate from the one offered on the NIPS registration page (i.e. you do not need to request it when registering for NIPS).

The number of places at the Hyatt is limited, so early registration is highly recommended!

The registration page can be found here.

Schedule

Videos of all workshop talks are available here.

2:00pm - 2:05pm Introductory remarks |avi|

2:05pm - 2:25pm Yee-Whye Teh, Gatsby Unit : Setting the Stage: Complementary Prior and Variational Bounds |pdf part1.avi part2.avi|

2:25pm - 2:45pm John Langford, Yahoo Research: (Lack of) Deep Learning Theory |pdf avi|

2:45pm - 3:05pm Yoshua Bengio, University of Montreal: Optimizing Deep Architectures |pdf part1.avi part2.avi|

3:05pm - 3:25pm Yann Le Cun, New York University: Learning a Deep Hierarchy of Sparse Invariant Features |pdf part1.avi part2.avi|

3:25pm - 3:45pm Martin Szummer, Microsoft Research: Deep Networks for Information Retrieval |pdf part1.avi part2.avi|

3:45pm - 4:00pm Coffee break

4:00pm - 4:20pm Max Welling, University of California, Irvine: (Infinite) Deep Networks |ppt part1.avi part2.avi|

4:20pm - 4:40pm Rajat Raina, Stanford University: Self-taught Learning |ppt pptx avi|

4:40pm - 5:00pm Geoff Hinton, University of Toronto: How to do Backpropagation in a Brain |ppt avi|

5:00pm - 5:30pm Discussion |avi|

Relevant literature

First paper introducing Deep Belief Networks (as generative models):

  • A fast learning algorithm for deep belief nets | pdf ps.gz html |
    Hinton, G. E., Osindero, S. and Teh, Y. W.
    Neural Computation (2006)
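
The building block of a Deep Belief Network is the Restricted Boltzmann Machine (RBM), trained with the contrastive divergence (CD-1) approximation. Below is a minimal single-example numpy sketch of one CD-1 update; real implementations use mini-batches, momentum and weight decay (np is imported in the sketch above).

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def cd1_update(v0, W, b, c, rng, lr=0.1):
        """One CD-1 step for a binary RBM; W: (n_hid, n_vis) weights,
        b: visible biases, c: hidden biases, all updated in place."""
        # Positive phase: hidden probabilities and a sample, given the data.
        ph0 = sigmoid(W @ v0 + c)
        h0 = (rng.random(ph0.shape) < ph0).astype(float)
        # Negative phase: one Gibbs step back down and up ("reconstruction").
        pv1 = sigmoid(W.T @ h0 + b)
        v1 = (rng.random(pv1.shape) < pv1).astype(float)
        ph1 = sigmoid(W @ v1 + c)
        # Approximate log-likelihood gradient:
        # data statistics minus reconstruction statistics.
        W += lr * (np.outer(ph0, v0) - np.outer(ph1, v1))
        b += lr * (v0 - v1)
        c += lr * (ph0 - ph1)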

Review paper on deep architectures and details of Deep Belief Nets and Restricted Boltzmann Machines:

  • Learning Deep Architectures for AI | link |
    Bengio, Y. Foundations and Trends in Machine Learning (to appear, 2009).

Book chapter about the philosophy behind deep architecture models, motivating them in the context of Artificial Intelligence:

  • Scaling Learning Algorithms towards AI | pdf |
    Bengio, Y. and LeCun, Y.
    Book chapter in "Large-Scale Kernel Machines"

Deep Belief Networks as a simple way of initializing a deep feed-forward neural network:

  • To recognize shapes, first learn to generate images | pdf |
    Hinton, G. E.
    Technical Report (2006)

General study of the framework of initializing a deep feed-forward neural network using a greedy layer-wise procedure:

  • Greedy Layer-Wise Training of Deep Networks | pdf tech-report-pdf |
    Bengio, Y., Lamblin, P., Popovici, D. and Larochelle, H.
    NIPS 2006
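
Concretely, the greedy procedure trains one RBM on the raw data, treats its hidden-unit activations as "data" for the next RBM, and so on; the resulting weights then initialize a feed-forward network that is fine-tuned with backpropagation. A sketch reusing cd1_update and sigmoid from above (layer sizes and epoch count are arbitrary placeholders):

    def train_dbn(X, layer_sizes, rng, epochs=10):
        """Greedy layer-wise pretraining. X: (n_examples, n_visible)
        binary data; layer_sizes: e.g. [784, 500, 500]. Returns one
        (W, b, c) triple per trained RBM."""
        layers = []
        for n_vis, n_hid in zip(layer_sizes[:-1], layer_sizes[1:]):
            W = rng.normal(0.0, 0.01, (n_hid, n_vis))
            b, c = np.zeros(n_vis), np.zeros(n_hid)
            for _ in range(epochs):
                for v in X:
                    cd1_update(v, W, b, c, rng)
            layers.append((W, b, c))
            X = sigmoid(X @ W.T + c)  # hidden activations feed the next RBM
        return layers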

An application of greedy layer-wise learning of a deep autoassociator for dimensionality reduction:

  • Reducing the dimensionality of data with neural networks | pdf support-pdf code |
    Hinton, G. E. and Salakhutdinov, R. R.
    Science 2006
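
The key move in that paper is to unroll the pretrained stack into a deep autoassociator: the stack encodes, its transposed weights decode, and the whole network is fine-tuned by backpropagating reconstruction error. A sketch of the unrolled forward pass, assuming layers produced by train_dbn above (for simplicity every unit here is logistic, whereas the paper uses a linear code layer):

    def autoencode(x, layers):
        """Map an input to its low-dimensional code and back."""
        for W, b, c in layers:            # encoder: bottom-up
            x = sigmoid(W @ x + c)
        code = x
        for W, b, c in reversed(layers):  # decoder: top-down, tied weights
            x = sigmoid(W.T @ x + b)
        return code, x                    # code and reconstruction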

A way to use the greedy layer-wise learning procedure to learn a useful embedding for k-nearest-neighbor classification:

  • Learning a Nonlinear Embedding by Preserving Class Neighbourhood Structure | pdf |
    Salakhutdinov, R. R. and Hinton, G. E.
    AISTATS 2007

Various theoretical results about Restricted Boltzmann Machines (RBMs) and Deep Belief Networks, such as the universal approximation property of RBMs:

  • Representational Power of Restricted Boltzmann Machines and Deep Belief Networks | pdf |
    Le Roux, N. and Bengio, Y.
    Technical Report, to appear in Neural Computation.

A novel way of using greedy layer-wise learning for Convolutional Networks:

  • Unsupervised Learning of Invariant Feature Hierarchies with Applications to Object Recognition | pdf |
    Ranzato, M., Huang, F.-J., Boureau, Y.-L. and Le Cun, Y.
    CVPR 2007

How to generalize Restricted Boltzmann Machines to types of data other than binary, using exponential family distributions:

  • Exponential Family Harmoniums with an Application to Information Retrieval | pdf ps |
    Welling, M., Rosen-Zvi, M. and Hinton, G. E.
    NIPS 2004
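
For instance, real-valued inputs can be modeled by giving the visible units Gaussian conditionals while the hidden units stay binary. A sketch of the corresponding CD-1 step, assuming unit-variance (standardized) data and modifying cd1_update above; the smaller learning rate is a common practical choice, not something taken from the paper.

    def cd1_update_gaussian(v0, W, b, c, rng, lr=0.001):
        """CD-1 for an RBM with Gaussian (unit-variance) visible units
        and binary hidden units."""
        ph0 = sigmoid(W @ v0 + c)
        h0 = (rng.random(ph0.shape) < ph0).astype(float)
        v1 = W.T @ h0 + b + rng.normal(size=b.shape)  # Gaussian reconstruction
        ph1 = sigmoid(W @ v1 + c)
        W += lr * (np.outer(ph0, v0) - np.outer(ph1, v1))
        b += lr * (v0 - v1)
        c += lr * (ph0 - ph1)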

An evaluation of deep networks on many datasets related to vision:

  • An Empirical Evaluation of Deep Architectures on Problems with Many Factors of Variation | pdf html |
    Larochelle, H., Erhan, D., Courville, A., Bergstra, J., Bengio, Y.
    ICML 2007

Application of deep learning in the context of information retrieval:

  • Semantic Hashing | pdf |
    Salakhutdinov, R. R. and Hinton, G. E.
    IRGM 2007
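
Roughly, semantic hashing trains a deep autoencoder whose code layer is pushed toward binary values, then uses the thresholded code of a document as a memory address, so that semantically similar documents land at nearby addresses. A toy sketch of the retrieval side, assuming codes produced by a network like autoencode above:

    def semantic_hash(code, threshold=0.5):
        """Threshold a real-valued code vector into an integer address."""
        bits = (code > threshold).astype(int)
        return int("".join(map(str, bits)), 2)

    # index = {}
    # index.setdefault(semantic_hash(code), []).append(doc_id)
    # Retrieval: look up the query's address (and addresses within a small
    # Hamming distance) instead of scanning the whole collection.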