IFT 6269 - Fall 2017: Class projects

Description

The project gives you the opportunity to study some concepts of the course in greater depth. The topic has to be linked to algorithms, concepts or methods presented in class, but beyond this requirement, the choice is quite open. In particular, it may be tailored to your interests: we encourage you to choose a paper that closely fits them, and any personal original contribution is valued.

The standard class projects need to contain the following three components:

  1. An article review on a given topic (a research article or a chapter from Mike's book not studied in class); see below for a list of tentative projects. This means reading and understanding a specific research article.

  2. An implementation of the method.

  3. Experiments with real data. This means applying the method to real data and reporting your findings and observations. If the paper is quite dense and theoretical, then experiments on simulated / synthetic data are sufficient.

The project may be done in groups of three or four. Once you have an idea for a project, it is mandatory to have it validated by the teacher by submitting a short description of it on Studium.

Evaluation

The final class project counts for 30% of the grade. Evaluation will be based on:

  1. A report (of about 4 to 8 pages) presenting the project and the obtained results (for applicative projects), to be submitted by December 20th, 2017 on Studium. The report has to be written in such a way that any student who has followed the class can understand it (there is no need to introduce graphical model concepts). The report has to clearly present (in French or English) the studied problem and the existing approaches. You will be evaluated more on the clarity of the report than on its length. To train you to write professional research papers, you should use LaTeX in the ICML 2016 template format (download the template here). You may use appendices for additional details beyond 8 pages if you want, but be aware that, as in standard conference reviewing, I might only read the first 8 pages (so the main content has to be there); also, succinctness is valued more here than length!

  2. A poster presentation of 6 minutes, made from at most 6-8 letter-sized pages (or in poster format if you fancy it, but this is not required), which will be displayed on rolling boards during a poster session on Tuesday December 12th from 1:30pm to 4:30pm in the mezzanine of the Jean-Coutu atrium. The presentation (in French or English) is also geared towards the other students, and the goal is to highlight in the allocated time the salient points of your project. As in a regular conference poster session, students are encouraged to visit the other students' posters. Other guidelines for the poster and presentation:

    1. The poster should be in English so that all students can understand it (but your presentation to me can be in French if you prefer).

    2. The content of your poster has a double purpose:

      1. to explain clearly to the other students in the class the model, problems and algorithms you have worked on, and any interesting observations you have made.

      2. to serve as the support for your 6-minute oral presentation of your project.

    3. The 6-minute timing will be strict, as we also want to be able to ask you a couple of questions and there are many of you. We highly recommend that you prepare ahead of time what you will say during these 6 minutes. Highlight your understanding and the main things you have done (model, main algorithmic ideas, data, results).

IMPORTANT

  1. Each group of students has to obtain the agreement of the teacher on their project by submitting a short description of it on Studium by November 7th.

  2. Each student or group of students has to submit a small mid-project report (a one-page pdf document) presenting the progress and the results obtained, so that some feedback may be given. This report has to be submitted on Studium before November 28th.

Schedule

The various steps are summarized below.

End of October: Choose a project (three or four students per project).
Before 11/7: Choose your group and submit your project choice on Studium.
Before 11/28: Submit a draft (1 page) with first results on Studium.
On 12/12: Poster session.
Before 12/20: Submit your project report (4-8 pages, ICML format) on Studium.

Article or chapter review

The goal of this list of projects is to present classical (and hopefully interesting) articles that use or improve graphical models. It may give you an idea of current research topics as well as of applicative projects. Even if you do not select any of these, reading some of them is advised.

Probabilistic PCA: An interpretation of PCA as a graphical model close to factor analysis, and a situation where EM has no local minima.

Tipping, M. E. and Bishop, C. M. (1999). Probabilistic principal component analysis. Journal of the Royal Statistical Society, Series B, 61(3):611-622. [pdf]
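
For concreteness, here is a minimal NumPy sketch of the closed-form EM updates for the probabilistic PCA model x = Wz + mu + noise (our own illustration of the updates derived in the paper, not the authors' code):

```python
import numpy as np

def ppca_em(X, q, n_iter=100, seed=0):
    """EM for probabilistic PCA (after Tipping & Bishop, 1999) -- a sketch."""
    rng = np.random.default_rng(seed)
    N, d = X.shape
    mu = X.mean(axis=0)
    Xc = X - mu                                   # centered data
    W = rng.standard_normal((d, q))               # factor loading matrix
    sigma2 = 1.0                                  # isotropic noise variance
    for _ in range(n_iter):
        # E-step: posterior moments of the latent variables z_n
        Minv = np.linalg.inv(W.T @ W + sigma2 * np.eye(q))
        Ez = Xc @ W @ Minv                        # rows are E[z_n | x_n]
        sumEzz = N * sigma2 * Minv + Ez.T @ Ez    # sum_n E[z_n z_n^T]
        # M-step: closed-form updates of W and sigma^2
        W = (Xc.T @ Ez) @ np.linalg.inv(sumEzz)
        sigma2 = (np.sum(Xc**2) - 2 * np.sum((Xc @ W) * Ez)
                  + np.trace(sumEzz @ W.T @ W)) / (N * d)
    return mu, W, sigma2
```
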
Learning graph structure - multinomial models: For complete discrete data, learning the parameters and the directed acyclic graph.

D. Heckerman, D. Geiger, and D. Chickering. Learning Bayesian networks: the combination of knowledge and statistical data. Machine Learning, 20:197-243, 1995.

Learning graph structure - Gaussian models: For complete Gaussian data, learning the parameters and the directed acyclic graph.

D. Geiger and D. Heckerman. Learning Gaussian networks. Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence, pp. 235-243, 1994.
Variational methods for inference: A class of methods for approximate inference.

M. I. Jordan, Z. Ghahramani, T. S. Jaakkola, and L. K. Saul. An introduction to variational methods for graphical models. In M. I. Jordan (Ed.), Learning in Graphical Models. Cambridge: MIT Press, 1999.

Its application to Bayesian inference.

Beal, M. J. and Ghahramani, Z. Variational Bayesian learning of directed graphical models with hidden variables. Bayesian Analysis, 1(4), 2006.
Simulation methods for inference (particle filtering): Simulation-based inference for dynamic graphical models. Chapter from Kevin Murphy.

S. Arulampalam, S. Maskell, N. J. Gordon, and T. Clapp. A tutorial on particle filters for on-line non-linear/non-Gaussian Bayesian tracking. IEEE Transactions on Signal Processing, 50(2):174-188, February 2002.

A. Doucet, S. J. Godsill, and C. Andrieu. On sequential Monte Carlo sampling methods for Bayesian filtering. Statistics and Computing, 10:197-208, 2000.
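
To make this concrete, here is a minimal bootstrap particle filter in NumPy for a toy one-dimensional state-space model; the model and all parameter values are made up for illustration:

```python
import numpy as np

def bootstrap_filter(ys, n_particles=1000, sigma_x=1.0, sigma_y=1.0, seed=0):
    """Bootstrap particle filter for the (hypothetical) toy model
    x_t = 0.5 x_{t-1} + N(0, sigma_x^2),  y_t = x_t + N(0, sigma_y^2)."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 1.0, n_particles)   # draws from the prior
    means = []
    for y in ys:
        # propagate through the transition model (the "bootstrap" proposal)
        particles = 0.5 * particles + rng.normal(0.0, sigma_x, n_particles)
        # reweight by the observation likelihood p(y_t | x_t)
        logw = -0.5 * ((y - particles) / sigma_y) ** 2
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(np.sum(w * particles))         # filtering-mean estimate
        # multinomial resampling to fight weight degeneracy
        particles = rng.choice(particles, size=n_particles, p=w)
    return np.array(means)
```
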
Semi-Markovian models: A class of models that explicitly represent the time spent in any given state, for Markov chains and HMMs.

Note from Kevin Murphy [pdf]
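
As a quick illustration of the extra flexibility over a plain HMM, here is a sketch of sampling from an explicit-duration (semi-Markov) HMM; the duration and emission samplers are hypothetical user-supplied callables, e.g. dur_sample = lambda s, rng: rng.poisson(5) + 1:

```python
import numpy as np

def sample_hsmm(pi, A, dur_sample, emit_sample, T, seed=0):
    """Sample a length-T sequence from an explicit-duration HMM (a sketch).

    dur_sample(state, rng) draws how long the chain stays in `state`;
    emit_sample(state, rng) draws one observation. Both are assumptions here."""
    rng = np.random.default_rng(seed)
    states, obs = [], []
    s = rng.choice(len(pi), p=pi)
    while len(obs) < T:
        d = dur_sample(s, rng)                 # explicit duration, not geometric
        for _ in range(min(d, T - len(obs))):
            states.append(s)
            obs.append(emit_sample(s, rng))
        s = rng.choice(len(pi), p=A[s])        # jump to the next state after d steps
    return np.array(states), np.array(obs)
```
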
Learning parameters in an undirected graphical model (Markov random fields): Chapter 9 of Mike's book and related articles.
Dynamic graphical models: Chapter from Kevin Murphy. Specific topics to be defined.
General applications of the sum-product algorithm (e.g., to the FFT): S. M. Aji and R. J. McEliece. The generalized distributive law. IEEE Transactions on Information Theory, 46(2):325-343, 2000.
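
The core idea, that the distributive law turns one global sum into a product of local sums, can already be checked on a three-node chain; a self-contained example with arbitrary made-up potentials:

```python
import numpy as np

# Sum-product on the chain x1 -- x2 -- x3: the marginal of x2 is the product
# of the two incoming messages, each a local sum over one neighbour.
phi12 = np.array([[1.0, 2.0], [3.0, 1.0]])   # pairwise potential on (x1, x2)
phi23 = np.array([[2.0, 1.0], [1.0, 4.0]])   # pairwise potential on (x2, x3)

m1_to_2 = phi12.sum(axis=0)                  # message from x1 to x2
m3_to_2 = phi23.sum(axis=1)                  # message from x3 to x2
p_x2 = m1_to_2 * m3_to_2
p_x2 /= p_x2.sum()                           # normalized marginal p(x2)

# brute-force check over all 2^3 configurations of the chain
joint = phi12[:, :, None] * phi23[None, :, :]
assert np.allclose(p_x2, joint.sum(axis=(0, 2)) / joint.sum())
```
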
Independent Component Analysis: A. Hyvärinen and E. Oja. Independent component analysis: algorithms and applications. Neural Networks, 13(4-5):411-430, 2000.

Course notes by Hervé Le Borgne: http://www.eeng.dcu.ie/~hlborgne/pub/th_chap3.pdf
Jean-François Cardoso. Dependence, correlation and Gaussianity in independent component analysis. Journal of Machine Learning Research, 4(Dec):1177-1203, 2003. [pdf]
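
The classic demonstration, recovering independent sources from linear mixtures, takes a few lines with scikit-learn's FastICA; the sources and mixing matrix below are made up:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)
S = np.c_[np.sin(3 * t), np.sign(np.sin(5 * t))]   # two independent sources
A = np.array([[1.0, 0.5], [0.4, 1.0]])             # mixing matrix
X = S @ A.T                                        # observed mixtures

S_hat = FastICA(n_components=2, random_state=0).fit_transform(X)
# S_hat recovers the sources up to permutation and scaling
```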

Canonical Correlation Analysis

CCA is analogous to PCA for the joint analysis of two random vectors X and Y (a short usage sketch follows the references below).
  1. D. R. Hardoon, S. Szedmak, and J. Shawe-Taylor. Canonical correlation analysis: an overview with application to learning methods. Neural Computation, 16(12):2639-2664, 2004.
  2. F. R. Bach and M. I. Jordan. A probabilistic interpretation of canonical correlation analysis. Technical Report 688, Department of Statistics, University of California, Berkeley, 2005.
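
As a sanity check of this idea, here is a short sketch with scikit-learn's CCA on synthetic data built around a shared one-dimensional latent signal (all data and dimensions are made up):

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
z = rng.standard_normal((500, 1))                  # shared latent signal
X = z @ rng.standard_normal((1, 5)) + 0.5 * rng.standard_normal((500, 5))
Y = z @ rng.standard_normal((1, 4)) + 0.5 * rng.standard_normal((500, 4))

cca = CCA(n_components=1)
Xs, Ys = cca.fit_transform(X, Y)                   # canonical projections
r = np.corrcoef(Xs[:, 0], Ys[:, 0])[0, 1]          # first canonical correlation
print(f"first canonical correlation: {r:.3f}")     # close to 1 by construction
```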

Clustering through a mixture of PCA

M. E. Tipping and C. M. Bishop. Mixtures of probabilistic principal component analyzers. Neural Computation, 11(2):443-482, 1999.

Stochastic relational models
  • E. M. Airoldi et al. Mixed membership stochastic block models for relational data with application to protein-protein interactions. In Proceedings of the International Biometrics Society Annual Meeting, 2006.

Conditional Random Fields

  • Charles Sutton and Andrew McCallum. An introduction to conditional random fields for relational learning. In Lise Getoor and Ben Taskar, editors, Introduction to Statistical Relational Learning. MIT Press, 2007.

Dirichlet Process

  • R. M. Neal. Markov chain sampling methods for Dirichlet process mixture models. Journal of Computational and Graphical Statistics, 9(2):249-265, 2000.
  • Y. W. Teh. Dirichlet Process. Submitted to Encyclopedia of Machine Learning, 2007.
  • A. Ranganathan. The Dirichlet Process Mixture (DPM) Model. 2004.


Factorial HMM

  • Z. Ghahramani and M. I. Jordan. Factorial hidden Markov models. Machine Learning, 29(2):245-273, 1997.

Generalized PCA

  • M. Collins, S. Dasgupta, and R. E. Schapire. A generalization of principal component analysis to the exponential family. Advances in Neural Information Processing Systems, pp. 617-624, 2002.

Structure learning by L1 regularization

  • J. Friedman, T. Hastie, and R. Tibshirani. Sparse inverse covariance estimation with the graphical lasso. Biostatistics, 9(3):432-441, 2008.
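
For experiments, the method is readily available; a minimal sketch using scikit-learn's GraphicalLasso estimator (called GraphLasso in old scikit-learn versions; the data here are synthetic):

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
X = rng.multivariate_normal(np.zeros(5), np.eye(5), size=200)

model = GraphicalLasso(alpha=0.2).fit(X)
# zeros in the estimated precision matrix correspond to absent edges
# in the Gaussian graphical model
edges = np.abs(model.precision_) > 1e-6
print(edges)
```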


Mixture of log-concave densities

Interesting non-parametric models for unimodal distributions.

  • T. Chang and G. Walther. Clustering with mixtures of log-concave distributions. Computational Statistics & Data Analysis, 51(12):6242-6251, 2007.
  • G. Walther. Inference and modeling with log-concave distributions. Statistical Science, 25, 2010.

Examples of applications of graphical models

These articles present classical applications. They may give you ideas for an applicative project or may be used for article reviews.

Bioinformatics: Chapter 23 of Mike's book.

Phylogenetic HMM:
A. Siepel and D. Haussler. Phylogenetic hidden Markov models. In Statistical Methods in Molecular Evolution, pp. 325-351, 2005.

Vision/Speech: Articles from Kevin Murphy:
Kevin Murphy, Antonio Torralba, and William Freeman. Using the forest to see the trees: a graphical model relating features, objects and scenes. NIPS 2003 (Neural Information Processing Systems).

A. Nefian, L. Liang, X. Pi, X. Liu, and K. Murphy. Dynamic Bayesian networks for audio-visual speech recognition. EURASIP Journal on Applied Signal Processing, 11:1-15, 2002.

Optimization for MAP inference in computer vision:
N. Komodakis, N. Paragios, and G. Tziritas. MRF optimization via dual decomposition: message-passing revisited. ICCV 2007. A longer technical report version is also available.

Robotics: Automatic construction of maps.
Thrun et al. Simultaneous localization and mapping with sparse extended information filters. The International Journal of Robotics Research, 23:693-716, 2004.
(See also chapter 15 of Mike's book on Kalman filtering.)

Text: Naive Bayes:
A. McCallum and K. Nigam. A comparison of event models for Naive Bayes text classification. In AAAI-98 Workshop on Learning for Text Categorization, 1998.

Latent Dirichlet allocation:
D. Blei, A. Ng, and M. Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993-1022, January 2003. [pdf | code]
See also the topic modeling webpage.
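
Fitting LDA on a small corpus is a natural first experiment; a minimal sketch with scikit-learn (toy documents, arbitrary hyperparameters):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "the cat sat on the mat", "dogs and cats are pets",
    "stock markets fell sharply", "investors sold shares on the market",
]
vec = CountVectorizer()
counts = vec.fit_transform(docs)                   # document-word count matrix
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

# rows of components_ are unnormalized topic-word weights
words = vec.get_feature_names_out()                # get_feature_names in old versions
for k, topic in enumerate(lda.components_):
    top = [words[i] for i in topic.argsort()[-3:][::-1]]
    print(f"topic {k}: {top}")
```
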
Text - Natural language processing: S. Vogel, H. Ney, and C. Tillmann. HMM-based word alignment in statistical translation. In Proceedings of the 16th Conference on Computational Linguistics, pp. 836-841, Morristown, NJ, USA, 1996. Association for Computational Linguistics.

Probabilistic context-free grammars:
Lecture notes from CMU, 1999.

Implementation of algorithms

N most probable configurations: Implementation of an algorithm (for HMMs or more complex graphs), from the following articles:

Dennis Nilsson and Jacob Goldberger. An efficient algorithm for sequentially finding the N-best list. IJCAI, 1999.

Chen Yanover and Yair Weiss. Finding the M most probable configurations using loopy belief propagation. NIPS 2003.
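
A natural warm-up before the N-best algorithms is plain Viterbi decoding, i.e. the N = 1 case; a NumPy sketch with interfaces of our own choosing:

```python
import numpy as np

def viterbi(log_pi, log_A, log_obs):
    """Most probable HMM state sequence (the N = 1 case of the N-best problem).

    log_pi: (K,) initial log-probabilities; log_A: (K, K) transition log-probs;
    log_obs: (T, K) observation log-likelihoods log p(y_t | s_t = k)."""
    T, K = log_obs.shape
    delta = log_pi + log_obs[0]              # best log-prob ending in each state
    back = np.zeros((T, K), dtype=int)       # argmax bookkeeping for backtracking
    for t in range(1, T):
        scores = delta[:, None] + log_A      # (previous state, next state)
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_obs[t]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):            # backtrack from the best final state
        path.append(int(back[t][path[-1]]))
    return path[::-1]
```
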
Computation of tree-width: Comparing the classical heuristics and finer methods:

Mark Hopkins and Adnan Darwiche. A practical relaxation of constant-factor treewidth approximation algorithms. Proceedings of the First European Workshop on Probabilistic Graphical Models, 2002.

Also some exact methods:
Stefan Arnborg, Derek G. Corneil, and Andrzej Proskurowski. Complexity of finding embeddings in a k-tree. SIAM Journal on Algebraic and Discrete Methods, 8(2):277-284, 1987.
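
Among the classical heuristics, min-degree elimination is perhaps the simplest; a sketch that returns the induced width of the elimination order, an upper bound on the treewidth:

```python
def min_degree_width(adj):
    """Treewidth upper bound from the min-degree elimination heuristic (a sketch).

    adj: dict mapping each vertex to the set of its neighbours (undirected graph)."""
    adj = {v: set(ns) for v, ns in adj.items()}   # work on a copy
    width = 0
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))   # eliminate a min-degree vertex
        nbrs = adj.pop(v)
        width = max(width, len(nbrs))             # size of the clique formed here
        for a in nbrs:                            # fill-in: connect the neighbours
            adj[a] |= nbrs - {a}
            adj[a].discard(v)
    return width

# example: eliminating a triangle gives induced width 2
assert min_degree_width({0: {1, 2}, 1: {0, 2}, 2: {0, 1}}) == 2
```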

Generative models and deep learning

Recurrent Neural Networks (RNN): Survey paper: Zachary C. Lipton, John Berkowitz, and Charles Elkan. A critical review of recurrent neural networks for sequence learning. arXiv:1506.00019v4 [cs.LG].

See also this list for many more pointers.
Variational Auto-Encoder (VAE): Diederik P. Kingma and Max Welling. Auto-encoding variational Bayes. ICLR 2014.
See also this tutorial.
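
The heart of the method is a one-sample Monte Carlo estimate of the ELBO via the reparameterization trick; a NumPy sketch for a Gaussian encoder and a Bernoulli decoder (the decoder is a hypothetical user-supplied callable; the networks and the training loop are omitted):

```python
import numpy as np

def vae_elbo(x, mu, logvar, decoder, rng):
    """One-sample ELBO estimate for a Gaussian-encoder / Bernoulli-decoder VAE.

    mu, logvar are the encoder outputs for x; decoder(z) returns Bernoulli
    logits over x. These interfaces are assumptions made for this sketch."""
    eps = rng.standard_normal(mu.shape)     # reparameterization trick:
    z = mu + np.exp(0.5 * logvar) * eps     # z ~ q(z | x), differentiable in (mu, logvar)
    logits = decoder(z)
    # Bernoulli reconstruction term log p(x | z)
    log_px = np.sum(x * logits - np.logaddexp(0.0, logits))
    # KL(q(z | x) || N(0, I)), closed form for diagonal Gaussians
    kl = 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)
    return log_px - kl
```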

Complicated combination of the VAE with graphical models: Matthew Johnson, David K. Duvenaud, Alex Wiltschko, Ryan P. Adams, and Sandeep R. Datta. Composing graphical models with neural networks for structured representations and fast inference. NIPS 2016.
Generative Adversarial Networks (GAN): Ian Goodfellow. NIPS 2016 tutorial: Generative Adversarial Networks. arXiv:1701.00160v4 [cs.LG].
Neural Autoregressive Distribution Estimation (NADE): Benigno Uria, Marc-Alexandre Côté, Karol Gregor, Iain Murray, and Hugo Larochelle. Neural autoregressive distribution estimation. JMLR 2016.

ICLR 2018 Reproducibility Challenge

See this webpage for instructions. The idea is to replicate the experiments from an ICLR 2018 submission. The constraint for this class is that the topics of the chosen paper have to be related to the course content (generative modeling, graphical models, approximate inference, etc.). For example, a paper that only uses a CNN for some supervised learning task is not suitable, but a paper that uses a CNN to do generative modeling could be.