This course provides a unifying introduction to statistical modeling of multidimensional data through the framework of probabilistic graphical models, together with their associated learning and inference algorithms.
Teacher: Simon Lacoste-Julien, Office: 3339 André-Aisenstadt
Office hours: Friday 15h30-16h30
TA: Gabriel Huang, Office: 5512 André-Aisenstadt
Office hours: Tuesday 16h30-17h30
Lectures:
Tuesday 14h30-16h30 - G-815 Pav. Roger-Gaudry
Friday 13h30-15h30 - S-142 Pav. Roger-Gaudry
Probability review
Maximum likelihood estimation
Linear regression, logistic regression, Fisher discriminant
K-means, EM, Gaussian mixtures
Directed and undirected graphical models
Exponential family, information theory
Gaussian networks
Factor analysis
Sum-product algorithm, HMM, junction tree
Approximate inference: sampling, variational methods
Estimation of parameters in graphical models
Bayesian methods
Model selection
Homework (50%) – about 5 assignments; see the homework logistics below
Project (30%) – project report to hand in + poster presentation on Dec 11th; see the detailed information about projects
Final exam (20%) – take-home exam, handed out after the poster presentation
The prerequisites are previous coursework in linear algebra, multivariate calculus, and basic probability and statistics. There will be programming for the assignments, so familiarity with some matrix-oriented programming language will be useful (no specific language required; examples include Matlab/Octave, Python with numpy, etc.)
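For a rough sense of the kind of matrix-oriented programming involved, here is a minimal sketch in Python with numpy (one of the suggested options); the data and the estimator shown are purely illustrative and not taken from any assignment. It computes the maximum-likelihood estimates of the mean and covariance of Gaussian samples.

import numpy as np

# Purely illustrative example: draw n samples from a 2-D Gaussian,
# then recover the maximum-likelihood estimates of its parameters.
rng = np.random.default_rng(0)
true_mean = np.array([1.0, -2.0])
true_cov = np.array([[2.0, 0.5],
                     [0.5, 1.0]])
n = 1000
X = rng.multivariate_normal(true_mean, true_cov, size=n)  # shape (n, 2)

mu_hat = X.mean(axis=0)          # MLE of the mean
Xc = X - mu_hat                  # centered data
sigma_hat = (Xc.T @ Xc) / n      # MLE of the covariance (1/n, not 1/(n-1))

print("estimated mean:", mu_hat)
print("estimated covariance:\n", sigma_hat)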
The course will follow the (unpublished) manuscript An Introduction to Probabilistic Graphical Models by Michael I. Jordan, which will be made available to the students (but please do not distribute it!).
Supplementary references:
For a very detailed and rigorous reference: Probabilistic Graphical Models: Principles and Techniques by Daphne Koller and Nir Friedman. Referred to as KF in the outline below.
See Part I of the Deep Learning book by Ian Goodfellow, Yoshua Bengio and Aaron Courville for a very gentle review of the applied math useful for this class. Chapter 5 contains a useful presentation of machine learning basics. Referred to as DL in the outline below.
Another classic book, with a more Bayesian perspective than Mike's book (but, unlike Mike's, actually complete), is Pattern Recognition and Machine Learning by Chris Bishop. Referred to as B in the outline below.
The homework is to be handed in on paper at the beginning of class (Tuesday) on the due date (derivations, proofs, report, graphs). The code (in the language of your choice) is submitted on Studium as a zipped file (.zip or .tar.gz) with a README explaining to the TA how to run and test it.
Collaboration policy: you can collaborate with colleagues while working on the homework, but you need to write your own independent write-up. If you have collaborated with others on a question, you need to credit their help by naming them in the write-up (proper acknowledgment is good practice for academia later).
Late homework policy:
You have a budget of 6 late days that you can spend on the 5 homework assignments. To use these days, you need to declare it by writing it on the late homework when you hand it in.
To hand in a homework late, drop it in the designated box in the office of Yoshua's administrative assistant, Julie Mongeau, at 3245 André-Aisenstadt, during business hours (i.e. before 5pm). You also need to send an email to the TA with the subject line “[IFT6269 late]” when this is done so that he is aware of your late homework.
The late day penalty will be the following (as deadlines are on Tuesday):
handed in Tuesday after beginning of class: 10% penalty (or 1 day late used)
handed in Wednesday: 20% penalty (or 1 day late used)
handed in Thursday: 40% penalty (or 2 days late used)
handed in Friday: 80% penalty (or 3 days late used)
handed in Monday: 100% penalty (or 4 days late used)
handed in Tuesday (one week later): 100% penalty (or 5 days late used)
In other words, anything handed in after Friday gets no credit unless you have used some of your late days to reduce the number of days counted late.
No assignment will be accepted more than one week late.
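Worked example of the rules above: if you hand Hwk 2 in on the Thursday after its Tuesday deadline, you either take the 40% penalty or declare 2 of your late days on the copy you hand in, which would leave 4 of your 6 late days for the remaining homework.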
Below is a draft detailed outline that will be updated as the class goes on. For now it is the outline copied from the Fall 2017 version of this class, with links to the relevant old scribbled notes, which will gradually be replaced with new ones. The related chapters in Mike's book are given (note that they do not exactly correspond to the class content), along with occasional pointers to the Koller and Friedman book (KF), the Deep Learning book (DL) or Bishop's book (B). Related past "scribe notes" from the class that I taught in Paris are given for now, and will be replaced with this class's scribe notes as I receive them (if I receive some).
Date | Topics | Related chapters / Scribbled notes | Scribe notes | Homework milestones |
Sept 4 | Set-up & overview | intro slides lecture1 | Isabela Albuquerque (Fa17) lecture1.pdf source | |
Sept 7 | Probability review | 2.1.1 DL: 3 (nice and gentle) KF: 2.1 (more rigorous) lecture2 | William Léchelle (Fa16) lecture2.pdf source | |
Sept 11 | Parametric models Frequentist vs. Bayesian | 5 lecture3 | Philippe Brouillard and Tristan Deleu (Fa17) lecture3.pdf source | Hwk 1 out (hwk 1 source) |
Sept 14 | Bayesian (cont.) Maximum likelihood | lecture4 | Philippe Brouillard and Tristan Deleu (Fa17) lecture4.pdf | |
Sept 18 | MLE (cont.) Statistical decision theory | 1.3 in Bickel & Doksum Bias-variance tradeoff: 7.3 in Hastie's book lecture5 | Sébastien Lachapelle (Fa17) lecture5.pdf source | |
Sept 21 | Properties of estimators | lecture6 | same as lecture5 | |
Sept 25 | Linear regression Logistic regression | DL: 7.1 (l2, l1-reg.) 6, 7 lecture7 | Zakaria Soliman (Fa16) lecture6.pdf source | Hwk 1 due Hwk 2 out (hwk 2 source) data.zip |
Sept 28 | Optimization Logistic regression (cont.) + IRLS | 7 DL: 4.3 Boyd's book lecture8 | MVA lecture2 | |
Oct 2 | Gen. classification (Fisher) Derivative tricks for Gaussian MLE Kernel trick (skipped) K-means | Matrix Diff. book 10, 11 old lecture9 (kernel trick) lecture9 | MVA lecture3 | |
Oct 5 | GMM and EM | 10,11 lecture10 | ||
Oct 9 | Graph theory Directed graphical models | 2 lecture11 | MVA lecture4 | Hwk 2 due Hwk 3 out (hwk 3 source) data.zip |
Oct 12 | DGM (cont.) Undirected graphical models | 2 lecture12 | ||
Oct 16 | UGM (cont.) Inference: elimination alg. | 3 lecture13 | ||
Oct 19 | Sum-product alg. Max-product junction tree | 4, 17 lecture14 | MVA lecture7 | |
Oct 23 | Break: look at projects | | | |
Oct 26 | Break: look at projects | | | |
Oct 30 | HMM and EM | 12 lecture15 | | Hwk 3 due Hwk 4 out (hwk 4 source) |
Nov 2 | Information theory Max entropy Duality | 19 lecture16 | MVA lecture5 | |
Nov 6 | MaxENT duality Exponential families | 8 (KL geometry: old lecture16) lecture17 | MVA lecture6 MVA lecture8 | Project: team formed |
Nov 9 | Sampling | 21 lecture18 | ||
Nov 13 | MCMC sampling | 21 (variance reduction: see old lecture18) lecture19 | ||
Nov 16 (Gabriel's lecture) | Non-parametric models: Gaussian processes Dirichlet processes | 25 lecture20 slides GP book DP tutorial | ||
Nov 20 | MCMC (cont.) Gibbs sampling | lecture21 | | Hwk 4 due Hwk 5 out (hwk 5 source) |
Nov 23 | Variational methods Estimation in graphical models | Bishop: 10.1 lecture22 9 (skipped parts of: old lecture22) | ||
Nov 27 | Bayesian methods Model selection | 5, 26 lecture23 | MVA lecture10 | Project: 1 page progress report due |
Nov 30 | Gaussian networks Factor analysis, PCA, CCA (Kalman filter) VAE | old lecture24 13 old lecture17 Fa2016 14 , (15) old lecture18 Fa2016 VAE – DL: 20.10.3 | ||
Dec 4 | No lecture this week (NIPS); work on your project! | | | |
Dec 11 | Poster presentation, 1:00pm-4:00pm, mezzanine of the Jean-Coutu atrium | | | Hwk 5 due Take-home final out |
Dec 19 | | | | Project report due Take-home final due |