IFT 6268, Winter 2016

Machine Learning for Vision




Time and Place

Tuesdays 14:30 - 16:30
Z-260 Pav. Claire-McNicoll

Wednesdays 15:30 - 17:30
Z-205 Pav. Claire-McNicoll

Instructor

Roland Memisevic
Office: 3349, Pav. Andre-Aisenstadt
Office hours: drop in or by appointment.
email: memisevr@iro.umontreal.ca

Topics

Machine learning and visual perception; natural image statistics; learning visual features; bio-inspired models; learning mid-level features, motion, structure; vision and "big data".

Course description

Machine learning has made huge progress in recent years in building vision systems from the bottom up: by utilizing large amounts of image data and a minimum of hand-tweaking or human intervention. Most of the recent advances in visual perception stem from the fact that images are not "random": the set of natural images (imagine all photos that one could ever take on earth) is vanishingly small compared to the space of all possible images (all possible configurations of colored pixels). Natural images, for example, tend to contain large, homogeneous areas separated by few edges and even fewer junctions, and the combination of these features gives rise to the objects, people, landscapes, and other things we typically see and care about. By learning to exploit this inherent statistical structure, data-driven methods simplify the task of making sense of images and translate advances in computing power directly into advances in vision systems.
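
As a minimal illustration of this point, the following Python sketch (assuming NumPy and Pillow are available; the filename "photo.png" is a placeholder for any photo on disk) compares the correlation between neighbouring pixels in a natural image with that in an i.i.d. noise image:

    # Sketch of the claim above: neighbouring pixels in a natural image
    # are strongly correlated; in an i.i.d. "random" image they are not.
    # "photo.png" is a placeholder filename; any photo will do.
    import numpy as np
    from PIL import Image

    img = np.asarray(Image.open("photo.png").convert("L"), dtype=np.float64) / 255.0
    noise = np.random.rand(*img.shape)  # a typical point in the space of all images

    def neighbour_corr(x):
        # Correlation between each pixel and its right-hand neighbour.
        return np.corrcoef(x[:, :-1].ravel(), x[:, 1:].ravel())[0, 1]

    print("natural image:", neighbour_corr(img))    # typically well above 0.9
    print("random image: ", neighbour_corr(noise))  # close to 0.0

The gap between these two numbers is one of the simplest manifestations of the statistical structure that the learning methods in this course exploit.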

In this course we will survey recent research on machine learning for visual perception, paying particular attention to bottom-up, data-driven methods and the statistics of natural images. The format of the course will be a mix of lectures and discussions of recent papers in this emerging field. Final projects will be research-based and may ultimately lead to a research paper in the area.

Prerequisites

Familiarity with calculus, linear algebra and statistics is required. A background in vision is not required. Some experience with machine learning, as taught for example in IFT6141 (Reconnaissance des formes) or IFT6390 (Fondements de l'apprentissage machine), will be useful. If you are unsure whether your background is sufficient, contact the instructor.


Marking scheme

Tentative Time Table

Introduction / background

Jan 6: Intro. [notes]

Jan 19: Review of basic probability / linear algebra / ML. [notes]

Jan 20: Lecture: Some biological aspects of vision. Reading 1: Attneave 1954. [notes]

Jan 26: Lecture: Back-prop and neural networks. Reading 2: sparse coding (Foldiak, Endres; Scholarpedia 2008). Reading 1 due. [notes]

Feb 2: Lecture: Convolutional networks. Reading 3: cuDNN. Reading 2 due. [notes]

Feb 3: Lecture: Convolutional networks (contd.)

Feb 9: Lecture: Convnets and Fourier (I). Reading 4: ImageNet (read Sections 1 and 2, skim the rest). Reading 3 due. [notes]

Feb 10: Lecture: Fourier (II). [notes] (A1 deferred to next week)

Feb 16: Lecture: Fourier (III). Reading 5: "alex-net". Assignment 1 out. Reading 4 due. [notes]

Feb 17: Lecture: Fourier (III), contd.

Feb 23: Paper presentations and discussion. Reading 6: Thrun 1996. Reading 5 due.

Papers:
  • Going Deeper with Convolutions (Szegedy et al., 2014) [pdf]: describes GoogLeNet, which won the 2014 ImageNet competition with a performance most people doubted would be possible anytime soon. (See also this related paper.) (Florian B.)
  • Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift (Ioffe, Szegedy; 2015) [pdf] (Guillaume B.)
  • Very Deep Convolutional Networks for Large-Scale Image Recognition (Simonyan, Zisserman; 2015) [pdf]: a CNN architecture that has become very common. (Olexa B.)

Feb 24: Paper presentations and discussion. A1 due.

Papers:
  • Deep Residual Learning for Image Recognition (He et al., 2015) [pdf]: training very, very deep nets; ILSVRC 2015 winner. (Thomas G.)
  • Spatial Transformer Networks (Jaderberg et al., 2016) [pdf] (Vincent M.)

Mar 8: Paper presentations and discussion. Reading 6 due.

Papers:
  • Predicting Depth, Surface Normals and Semantic Labels with a Common Multi-Scale Convolutional Architecture (Eigen, Fergus; 2014) [pdf] (dataset): extracts richer scene information with conv-nets. (Chinna S.)
  • DeCAF: A Deep Convolutional Activation Feature for Generic Visual Recognition (Donahue et al.) [pdf]: one of several papers demonstrating the generality of learned convnet features. (See also this related paper.) (Luiz G. H.)
  • Learning Visual Features from Large Weakly Supervised Data (Joulin, v.d. Maaten, Jabri, Vasilache; 2015) [pdf]: can we do well without ImageNet? (Daniel E.)

Mar 9: Paper presentations and discussion. Reading 7: MS COCO (skim the details). Assignment 2 out.

Papers:
  • Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation (Girshick et al., 2014) [pdf]
  • Fast R-CNN (Girshick, 2015) [pdf]; see also Faster R-CNN. (Danlan C.)
  • You Only Look Once: Unified, Real-Time Object Detection (Redmon et al., 2015) [pdf] (Sarath C.)
  • C3D: Generic Features for Video Analysis (Tran et al., 2015) [pdf] (Faruk A.)

Mar 15: -- class canceled --
Mar 16: -- class canceled --

Mar 22: Lecture: Reinforcement Learning. Reading 8: VQA: Visual Question Answering (without the appendix). [notes]

Mar 23: Paper presentations and discussion. A2 due. Reading 7 due. *** No more reaction reports required from here on. ***

Papers:
  • Image Super-Resolution Using Deep Convolutional Networks (Dong et al., 2015) [pdf] (Mario B.)
  • DenseCap: Fully Convolutional Localization Networks for Dense Captioning (Johnson et al., 2015) [pdf] (Sarath C.)
  • Visual7W: Grounded Question Answering in Images (Zhu et al., 2015) [pdf] (Vincent M.)

Mar 29: Paper presentations and discussion. Reading 9: the deep dream blog post (also have a look at the code). Reading 8 due.

Papers:
  • Ask Your Neurons: A Neural-based Approach to Answering Questions about Images (Malinowski et al., 2015) [pdf]; see also the DAQUAR dataset. (Faruk A.)
  • Natural Language Object Retrieval (Hu et al., 2015) [pdf]; see also the dataset. (Luiz H.)

Mar 30: Paper presentations and discussion.

Papers:
  • Multiple Object Recognition with Visual Attention (Ba et al., 2015) [pdf]; see also this earlier paper (Mnih et al.). (Olexa B.)
  • DRAW: A Recurrent Neural Network for Image Generation (Gregor et al., 2015) [pdf] (Daniel E.)
  • A Neural Algorithm of Artistic Style (Gatys et al., 2015) [pdf] (Thomas G.)

April 5: Paper presentations and discussion. Reading 10: carefully read one paper that is highly relevant to your final project; in your email, state which paper you read. Reading 9 due.

Papers:
  • DeepStereo: Learning to Predict New Views from the World's Imagery (Flynn et al., 2015) [pdf] (Florian B.)
  • Pixel Recurrent Neural Networks (van den Oord et al., 2016) [pdf] (Guillaume B.)
  • Render for CNN: Viewpoint Estimation in Images Using CNNs Trained with Rendered 3D Model Views (Su et al., 2015) [pdf] (Mario B.)

April 6: Paper presentations and discussion.

Papers:
  • MovieQA: Understanding Stories in Movies through Question-Answering (Tapaswi et al., 2015) [pdf] (Danlan C.)
  • Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks (Radford et al., 2016) [pdf] (Chinna S.)
  • Visual Genome (Krishna et al., 2016) [pdf]
  • Dynamic Memory Networks for Visual and Textual Question Answering (Xiong et al., 2016) [pdf]

April 12: Lecture: Generative and variational methods. Reading 10 due. [notes]

April 13: Final project presentations and discussions.

Resources

Software

Datasets