“Predicting Structured Data”, Edited by Gökhan Bakır, Thomas Hofmann, Bernhard Schölkopf, Alexander J. Smola, Ben Taskar and S.V.N. Vishwanathan, MIT Press, 2006.
“Advanced Structured Prediction”, Edited by Sebastian Nowozin, Peter V. Gehler, Jeremy Jancsary and Christoph H. Lampert, MIT Press, 2014.
With applications:
“Linguistic Structure Prediction”, Noah Smith, Synthesis Lectures on Human Language Technologies, May 2011.
Sebastian Nowozin, Christoph H. Lampert, “Structured Learning and Prediction in Computer Vision”, Foundations and Trends in Computer Graphics and Vision (FnT CGV), 6(3-4), p. 185-365, 2011
Topics:
generative / discriminative continuum
examples of structured prediction models
Reading:
For next class, you can have a look at the tutorial on energy-based methods by Yann LeCun et al.:
Yann LeCun, Sumit Chopra, Raia Hadsell, Marc'Aurelio Ranzato and Fu-Jie Huang, “A Tutorial on Energy-Based Learning”, in Bakir, G., Hofmann, T., Schölkopf, B., Smola, A. and Taskar, B. (Eds), Predicting Structured Data, MIT Press, 2006.
Have a look at Sections 1 to 3, 5, 7 and 8. Section 7.1 covers the structured perceptron, the structured SVM and the CRF that I mentioned in today's class.
Topics:
statistical decision theory setup
energy models
examples: word alignment, image segmentation, OCR
Pointers
See lectures 4 & 5 of my PGM class for a review of statistical decision theory
Sources for examples:
word alignment: “A Discriminative Matching Approach to Word Alignment”. B. Taskar, S. Lacoste-Julien, and D. Klein, EMNLP 2005
image segmentation: “Learning Associative Markov Networks”, B. Taskar, V. Chatalbashev and D. Koller, ICML 2004
OCR: “Max-Margin Markov Networks”, B. Taskar, C. Guestrin and D. Koller, NIPS 2003 (best student paper award)
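To make the energy-model topic above concrete, here is the generic prediction rule used for these examples (a standard formulation, written in my notation): prediction is energy minimization, and for a linear model the energy is a negated linear score, so prediction becomes a score maximization.
\[
\hat{y}(x) = \operatorname*{argmin}_{y \in \mathcal{Y}} E(x, y; w),
\qquad
E(x, y; w) = -\langle w, \phi(x, y) \rangle
\;\Longrightarrow\;
\hat{y}(x) = \operatorname*{argmax}_{y \in \mathcal{Y}} \langle w, \phi(x, y) \rangle .
\]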
Topics:
structured prediction losses: perceptron, log-loss (CRF), structured hinge loss (structured SVM)
Pointers
Perceptron loss: Collins, M. “Discriminative training methods for hidden Markov models: theory and experiments with perceptron algorithms”, EMNLP 2002
Log loss (CRF): Lafferty, J., McCallum, A., Pereira, F. “Conditional random fields: Probabilistic models for segmenting and labeling sequence data”, ICML 2001.
Structured hinge loss (structured SVM):
B. Taskar, C. Guestrin and D. Koller, “Max-Margin Markov Networks”, NIPS 2003.
Ioannis Tsochantaridis, Thorsten Joachims, Thomas Hofmann and Yasemin Altun, “Large Margin Methods for Structured and Interdependent Output Variables”, JMLR, 2005.
The classical paper which covers several surrogate losses for binary classification: Bartlett, Peter L., Jordan, Michael I., and McAuliffe, Jon D., “Convexity, classification, and risk bounds”, Journal of the American Statistical Association, 101(473):138–156, 2006.
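As a quick reference for the three surrogate losses listed above (standard formulations, written in my notation with joint feature map phi and task loss L):
\[
\begin{aligned}
\ell_{\text{perceptron}}(w; x, y) &= \max_{y'} \langle w, \phi(x, y') \rangle - \langle w, \phi(x, y) \rangle \\
\ell_{\text{log}}(w; x, y) &= \log \sum_{y'} \exp\langle w, \phi(x, y') \rangle - \langle w, \phi(x, y) \rangle \quad \text{(CRF)} \\
\ell_{\text{hinge}}(w; x, y) &= \max_{y'} \big[ L(y, y') + \langle w, \phi(x, y') \rangle \big] - \langle w, \phi(x, y) \rangle \quad \text{(structured SVM)}
\end{aligned}
\]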
Topics:
structured SVM objective optimization
convex analysis recap; subgradient
landscape of convergence rates
stochastic subgradient method
Pointers
The “bible” for (deterministic) rates of convergence in convex optimization: Nesterov, Introductory Lectures on Convex Optimization, 2004.
Proof of convergence for the weighted average stochastic subgradient method: Lacoste-Julien, Schmidt, Bach, “A simpler approach to obtaining an O(1/t) convergence rate for the projected stochastic subgradient method”, arXiv:1212.2002
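For reference, the regularized structured SVM objective that the (stochastic) subgradient method is applied to, together with a subgradient of the i-th sampled (regularized) term, where y-hat_i denotes a loss-augmented maximizer (standard derivation):
\[
\min_{w}\; \frac{\lambda}{2}\|w\|^2 + \frac{1}{n}\sum_{i=1}^{n} \max_{y}\big[ L(y_i, y) + \langle w, \phi(x_i, y) - \phi(x_i, y_i)\rangle \big],
\qquad
g_i = \lambda w + \phi(x_i, \hat{y}_i) - \phi(x_i, y_i) .
\]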
Topics:
fundamentals of 1st order methods
convergence proof for stochastic subgradient method (both strongly convex or just convex setting)
structured SVM application
Pointers:
First application of stochastic subgradient method for structured SVM: Ratliff, N., Bagnell, J. A., and Zinkevich, M. “(Online) subgradient methods for structured prediction”, AISTATS, 2007.
Weighted average version for structured SVM – see Section 6 in: S. Lacoste-Julien, M. Jaggi, M. Schmidt and P. Pletscher, “Block-Coordinate Frank-Wolfe Optimization for Structural SVMs”, ICML 2013.
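To complement the pointers above, here is a minimal sketch (my own, not from the lecture) of the stochastic subgradient method with weighted averaging on the structured SVM objective; phi and loss_aug_decode are hypothetical oracles (joint feature map and loss-augmented decoder) assumed to be given.

    import numpy as np

    def stochastic_subgradient_ssvm(data, phi, loss_aug_decode, lam, T):
        # data: list of (x, y) pairs; phi(x, y) -> joint feature vector (numpy array)
        # loss_aug_decode(w, x, y) -> argmax_y' [ L(y, y') + <w, phi(x, y')> ]  (assumed oracle)
        n = len(data)
        d = phi(*data[0]).shape[0]
        w = np.zeros(d)
        w_avg = np.zeros(d)
        for t in range(1, T + 1):
            x, y = data[np.random.randint(n)]
            y_hat = loss_aug_decode(w, x, y)           # loss-augmented inference
            g = lam * w + phi(x, y_hat) - phi(x, y)    # subgradient of the sampled (regularized) term
            w = w - g / (lam * t)                      # step size 1/(lambda t), valid under lambda-strong convexity
            w_avg += 2.0 * t / (T * (T + 1)) * w       # weighted averaging (weights ~ t) gives the O(1/T) rate
        return w_avg

Returning the weighted average rather than the last iterate is what the Lacoste-Julien, Schmidt and Bach note above analyzes.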
Topics:
generic approach using duality to get small QP
saddle-point formulation
examples of efficient loss-augmented inference
Pointers:
Generic formulation using duality on the convex form of the loss-augmented inference: B. Taskar, V. Chatalbashev, D. Koller and C. Guestrin, “Learning Structured Prediction Models: A Large Margin Approach”, ICML 2005.
Saddle-point formulation and more details (e.g. including the word alignment example): B. Taskar, S. Lacoste-Julien, and M. Jordan, “Structured Prediction, Dual Extragradient and Bregman Projections.”, JMLR 2006.
M3-net paper: B. Taskar, C. Guestrin and D. Koller, “Max-Margin Markov Networks”, NIPS 2003.
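As a concrete instance of efficient loss-augmented inference (a standard observation, not specific to the papers above): with the Hamming loss, the loss-augmented decoding problem
\[
\max_{y}\; \Big[ \langle w, \phi(x, y)\rangle + \sum_{j} \mathbb{1}[y_j \neq y_j^{*}] \Big]
\]
only adds unary terms to the score, so it has the same factor-graph structure as MAP inference and can be solved with the same machinery (e.g. Viterbi for chains).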
Topics:
M3-net efficient formulation; marginal polytope; triangulated graphs
constraint generation algorithm (“cutting plane” misnomer)
Pointers:
The LP formulation of MAP inference in MRFs is explained, for example, in Section 13.5 of the Koller & Friedman book.
Original constraint generation approach for structured SVM: I. Tsochantaridis, T. Hofmann, T. Joachims, and Y. Altun, “Large Margin Methods for Structured and Interdependent Output Variables”, JMLR 2005.
Improved version (with 1-slack formulation): T. Joachims, T. Finley, Chun-Nam Yu, “Cutting-Plane Training of Structural SVMs”, Machine Learning Journal, 77(1):27-59, 2009.
Topics:
Recap of structured SVM optimization approaches
Lagrangian duality for structured SVM objective
Pointers:
See chapter 5 of Boyd's book for a detailed coverage of Lagrangian duality.
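For reference, one standard way to write the dual obtained from this derivation (following the notation used in the BCFW paper cited later, up to sign and scaling conventions), with psi_i(y) := phi(x_i, y_i) - phi(x_i, y):
\[
\max_{\alpha \ge 0,\; \sum_{y}\alpha_i(y)=1\;\forall i}\;
\sum_{i, y} \frac{L(y_i, y)}{n}\, \alpha_i(y)
\;-\; \frac{\lambda}{2} \Big\| \sum_{i, y} \frac{\alpha_i(y)}{\lambda n}\, \psi_i(y) \Big\|^2 ,
\qquad
w = \sum_{i, y} \frac{\alpha_i(y)}{\lambda n}\, \psi_i(y) .
\]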
Topics:
Properties of primal-dual structured SVM objective
1-slack vs. n-slack constraint generation approach
constraint generation algorithm (“cutting plane” misnomer)
Pointers:
Original constraint generation approach for structured SVM (n-slack): I. Tsochantaridis, T. Hofmann, T. Joachims, and Y. Altun, “Large Margin Methods for Structured and Interdependent Output Variables”, JMLR 2005.
Improved version (with 1-slack formulation): T. Joachims, T. Finley, Chun-Nam Yu, “Cutting-Plane Training of Structural SVMs”, Machine Learning Journal, 77(1):27-59, 2009.
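As a complement to the two constraint-generation pointers above, here is a minimal sketch of the n-slack constraint-generation loop; score, task_loss, loss_aug_decode and solve_qp_on_working_sets are hypothetical oracles/solvers assumed to be given (the last one solves the QP restricted to the current working sets).

    def constraint_generation_ssvm(data, score, task_loss, loss_aug_decode,
                                   solve_qp_on_working_sets, eps, max_iters):
        # data: list of (x, y) pairs
        # score(w, x, y)            -> <w, phi(x, y)>
        # task_loss(y_true, y)      -> L(y_true, y)
        # loss_aug_decode(w, x, y)  -> argmax_y' [ L(y, y') + score(w, x, y') ]
        # solve_qp_on_working_sets(working_sets) -> (w, xi), xi = vector of slacks, one per example
        n = len(data)
        working_sets = [[] for _ in range(n)]            # working set of constraints per example
        w, xi = solve_qp_on_working_sets(working_sets)   # with empty sets: w = 0, xi = 0
        for _ in range(max_iters):
            added = False
            for i, (x, y) in enumerate(data):
                y_hat = loss_aug_decode(w, x, y)         # most violated constraint for example i
                violation = task_loss(y, y_hat) + score(w, x, y_hat) - score(w, x, y)
                if violation > xi[i] + eps:              # violated by more than the tolerance
                    working_sets[i].append(y_hat)
                    added = True
            if not added:                                # all constraints satisfied up to eps: done
                break
            w, xi = solve_qp_on_working_sets(working_sets)
        return w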
Topics:
Frank-Wolfe algorithm and properties: sparsity, gap, affine invariance
Pointers:
Good modern overview of the Frank-Wolfe algorithm with applications in machine learning: M. Jaggi, “Revisiting Frank-Wolfe: Projection-Free Sparse Convex Optimization”, ICML 2013.
Modern survey of away-step Frank-Wolfe and other variants: S. Lacoste-Julien and M. Jaggi, “On the Global Linear Convergence of Frank-Wolfe Optimization Variants”, NIPS 2015.
A more efficient version of the away-step recently revisited: D. Garber and O. Meshi, “Linear-Memory and Decomposition-Invariant Linearly Convergent Conditional Gradient Algorithm for Structured Polytopes”, NIPS 2016.
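A minimal sketch of the basic Frank-Wolfe algorithm with the duality-gap stopping criterion (generic; grad and lmo are assumed to be given, the latter being the linear minimization oracle over the feasible polytope):

    import numpy as np

    def frank_wolfe(grad, lmo, x0, T, tol=1e-6):
        # grad(x) -> gradient of the smooth convex objective at x (numpy array)
        # lmo(g)  -> argmin_{s in polytope} <g, s>   (linear minimization oracle, assumed given)
        x = x0
        for t in range(T):
            g = grad(x)
            s = lmo(g)                          # Frank-Wolfe vertex
            gap = g @ (x - s)                   # duality gap: certificate of suboptimality
            if gap < tol:
                break
            gamma = 2.0 / (t + 2)               # standard step size; line search also possible
            x = (1 - gamma) * x + gamma * s     # convex combination stays feasible (hence sparse iterates)
        return x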
Topics:
Convergence proof of Frank-Wolfe algorithm
Application of FW to structured SVM
Pointers:
The standard 2/(t+2) step-size proof for Frank-Wolfe convergence: M. Jaggi, “Revisiting Frank-Wolfe: Projection-Free Sparse Convex Optimization”, ICML 2013.
Application of Frank-Wolfe to structured SVM: S. Lacoste-Julien, M. Jaggi, M. Schmidt and P. Pletscher, “Block-Coordinate Frank-Wolfe Optimization for Structural SVMs”, ICML 2013.
More technical pointers for the curious people:
The formal proof for the differential equation approach, see Lemma D.5 in: R. Krishnan, S. Lacoste-Julien and D. Sontag, “Barrier Frank-Wolfe for Marginal Inference”, NIPS 2015.
An illustration of the “brute-force” approach with arbitrary step sizes, but used in the context of convergence of SGD – see the proof of Theorem 1 in Appendix A of: F. Bach, E. Moulines, “Non-Asymptotic Analysis of Stochastic Approximation Algorithms for Machine Learning”, NIPS 2011.
More refined convergence results for generic step sizes for the Frank-Wolfe algorithm: R. M. Freund and P. Grigas, “New Analysis and Results for the Frank-Wolfe Method”, Mathematical Programming, 2016.
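For reference, the rate obtained with the standard 2/(t+2) step size (as in the Jaggi 2013 paper above), where C_f is the curvature constant of f over the domain:
\[
f(x_t) - f(x^\star) \;\le\; \frac{2\, C_f}{t + 2}, \qquad t \ge 1 .
\]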
Topics:
Applying FW variants on structured SVM objective
Relationships: FW on dual equivalent to batch subgradient method on primal; FCFW on dual equivalent to 1-slack cutting plane on primal.
Pointers:
The usual pointer for FW for structured SVM (as in previous lecture): S. Lacoste-Julien, M. Jaggi, M. Schmidt and P. Pletscher, “Block-Coordinate Frank-Wolfe Optimization for Structural SVMs”, ICML 2013.
The linear convergence of FCFW is given in (already cited before): S. Lacoste-Julien and M. Jaggi, “On the Global Linear Convergence of Frank-Wolfe Optimization Variants”, NIPS 2015.
A good application of using FCFW when the linear oracle is expensive is given in this paper: R. Krishnan, S. Lacoste-Julien and D. Sontag, “Barrier Frank-Wolfe for Marginal Inference”, NIPS 2015.
The generalization of the observation that Frank-Wolfe optimization sometimes reduces to the subgradient method on the primal is given in: F. Bach, “Duality between subgradient and conditional gradient methods”, SIAM Journal on Optimization, 25(1):115-129, 2015.
Topics:
Convex polytope
Marginal polytope
Fourier-Motzkin elimination
Affine invariant constant for FW on SVMstruct
Pointers:
Book to learn more about polytopes: Ziegler, Lectures on Polytopes, 1995.
To learn more about the marginal polytopes, see Section 3.4 of: M. J. Wainwright and M. I. Jordan, “Graphical models, exponential families, and variational inference”. Foundations and Trends in Machine Learning, 2008.
Also, see Figure 4.1 on p. 80 for an example of fractional corners of the local consistency polytope (vs. the marginal polytope, which only has integer vertices).
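For reference, the definition of the marginal polytope (as in Wainwright & Jordan, Section 3.4), for a sufficient statistics / feature map phi over configurations y in Y:
\[
\mathcal{M} = \big\{ \mu \in \mathbb{R}^{d} : \exists\, p \text{ a distribution on } \mathcal{Y} \text{ with } \mu = \mathbb{E}_{p}[\phi(Y)] \big\}
= \operatorname{conv}\{ \phi(y) : y \in \mathcal{Y} \} .
\]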
Topics:
Block-coordinate optimization
Block-coordinate Frank-Wolfe (BCFW) applied to structured SVM
Theory basics: decision theory setup
Pointers:
Nesterov coordinate method: Y. Nesterov, “Efficiency of Coordinate Descent Methods on Huge-Scale Optimization Problems”, SIAM J. Optim., 22(2), 341–362, 2012.
BCFW in all its gory details: S. Lacoste-Julien, M. Jaggi, M. Schmidt and P. Pletscher, “Block-Coordinate Frank-Wolfe Optimization for Structural SVMs”, ICML 2013.
Improvements of BCFW for SVMstruct (non-uniform sampling, away steps, etc.): A. Osokin, J.-B. Alayrac, I. Lukasewitz, P. Dokania and S. Lacoste-Julien, “Minding the Gaps for Block Frank-Wolfe Optimization of Structured SVMs”, ICML 2016.
Code.
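Below is a minimal sketch of BCFW applied to the structural SVM, following Algorithm 4 of the ICML 2013 paper cited above; it is simplified (no duality-gap stopping criterion, dense per-block storage), and psi, loss and loss_aug_decode are hypothetical oracles assumed to be given.

    import numpy as np

    def bcfw_ssvm(data, psi, loss, loss_aug_decode, lam, K):
        # psi(x, y_true, y)       -> phi(x, y_true) - phi(x, y)   (joint feature difference)
        # loss(y_true, y)         -> task loss L(y_true, y)
        # loss_aug_decode(w, x, y)-> argmax_y' [ loss(y, y') + <w, phi(x, y')> ]
        n = len(data)
        d = psi(data[0][0], data[0][1], data[0][1]).shape[0]
        w = np.zeros(d)
        w_i = np.zeros((n, d))                         # per-block primal contributions, w = sum_i w_i
        l_i = np.zeros(n)                              # per-block loss contributions
        for _ in range(K):
            i = np.random.randint(n)
            x, y = data[i]
            y_star = loss_aug_decode(w, x, y)          # loss-augmented decoding on block i
            w_s = psi(x, y, y_star) / (lam * n)        # corner of block i's dual simplex
            l_s = loss(y, y_star) / n
            diff = w_i[i] - w_s
            denom = lam * (diff @ diff)
            if denom == 0.0:
                continue                               # block already at this corner
            gamma = (lam * (diff @ w) - l_i[i] + l_s) / denom
            gamma = min(max(gamma, 0.0), 1.0)          # optimal step size, clipped to [0, 1]
            w_new = (1 - gamma) * w_i[i] + gamma * w_s
            w += w_new - w_i[i]                        # maintain w = sum_i w_i
            w_i[i] = w_new
            l_i[i] = (1 - gamma) * l_i[i] + gamma * l_s
        return w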
Topics:
no free lunch theorem
Occam's generalization error bound
Pointers:
Source for the uniform consistency of the voting rule for binary classification when X is finite (in French, sorry!): see Theorem 2.1 of the very nice lecture notes by Sylvain Arlot.
Source for the No Free Lunch theorems: Theorem 7.1 and Theorem 7.2 (Chapter 7) of Devroye et al., “A Probabilistic Theory of Pattern Recognition”, 1996.
Notes for Occam's bound:
(in French) lecture notes from a class I taught at ENS.
This class was based on very interesting notes from a class taught by David McAllester (the guy behind PAC-Bayes and many other things) – these are in English.
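For reference, a standard statement of Occam's bound (constants as in the usual Hoeffding plus union-bound derivation): for a countable class H with “prior” weights p(h) summing to at most 1 and a loss bounded in [0,1], with probability at least 1-delta over an i.i.d. sample of size n, simultaneously for all h in H,
\[
R(h) \;\le\; \widehat{R}_n(h) + \sqrt{\frac{\ln \tfrac{1}{p(h)} + \ln \tfrac{1}{\delta}}{2n}} .
\]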
Topics:
PAC-Bayes for structured prediction; probit loss
Pointers:
PAC-Bayes bound from McAllester 2003 was taken from Lemma 4 in: D. McAllester, “Generalization Bounds and Consistency for Structured Labeling”, in Predicting Structured Data, edited by G. Bakir, T. Hofmann, B. Scholkopf, A. Smola, B. Taskar, and S. V. N. Vishwanathan. MIT Press, 2007.
See also the proof in his lecture notes that I linked to in the last lecture.
Probit loss and its consistency for structured prediction: David McAllester, Joseph Keshet, “Generalization Bounds and Consistency for Latent Structural Probit and Ramp Loss”, (oral), NIPS 2011.
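One common form of the PAC-Bayes bound (the constants and logarithmic term differ slightly across versions, including McAllester's): with probability at least 1-delta, simultaneously for all “posteriors” Q over hypotheses, with the prior P fixed beforehand,
\[
\mathbb{E}_{h \sim Q}\big[R(h)\big] \;\le\; \mathbb{E}_{h \sim Q}\big[\widehat{R}_n(h)\big] + \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln \tfrac{n}{\delta}}{2(n-1)}} .
\]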
Topics:
Structured prediction surrogate losses recap
Motivation for structured prediction
Generalization error bound: VC dimension, Rademacher complexity
Structured prediction generalization error bounds (factor graph complexity)
Pointers:
VC dimension / Rademacher complexity for binary case: see slides from presentation by John Shawe-Taylor at MLSS 2009.
VC dimension definition: slide 38
generalization error bound for binary classification with VC dimension: slide 46
Rademacher complexity: slide 85
generalization error bound with Rademacher complexity: slide 87
Structured prediction generalization bound: Corinna Cortes, Vitaly Kuznetsov, Mehryar Mohri, Scott Yang, “Structured Prediction Theory Based on Factor Graph Complexity”, NIPS 2016.
Sidenote: relationship between constrained and regularized / penalized formulations:
see section 4.7.3 (Pareto) and 4.7.4 (scalarization) of Boyd's book for the formal relationships
(in French): see exercise 2 from this homework from my ENS class.
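As a reminder of the Rademacher-complexity generalization bound covered in the slides above (a typical statement; constants vary by source): for a class F of functions with values in [0,1] and an i.i.d. sample of size n, with probability at least 1-delta, simultaneously for all f in F,
\[
\mathbb{E}[f] \;\le\; \frac{1}{n}\sum_{i=1}^{n} f(z_i) + 2\,\mathfrak{R}_n(\mathcal{F}) + \sqrt{\frac{\ln \tfrac{1}{\delta}}{2n}} .
\]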
Topics:
Consistent surrogate loss for structured prediction
Calibration function
Some task losses are harder than others
Motivation for structured prediction
Generalization error bound: VC dimension, Rademacher complexity
Structured prediction generalization error bounds (factor graph complexity)
Pointers:
Main pointer covered today: Anton Osokin, Francis Bach, Simon Lacoste-Julien, “On Structured Prediction Theory with Calibrated Convex Surrogate Losses”, arXiv:1703.02403.
other pointers:
Canonical paper which presented the consistency analysis for binary classification: Bartlett, Peter L., Jordan, Michael I., and McAuliffe, Jon D., “Convexity, classification, and risk bounds”, Journal of the American Statistical Association, 101(473):138–156, 2006.
Paper which introduced the terminology of calibration function (binary case): Ingo Steinwart, “How to Compare Different Loss Functions and Their Risks”, Constructive Approximation, 26:225-287, 2007.
paper which showed that multiclass SVM is not consistent (for the 0-1 loss) and proposed a consistent alternative: Lee, Yoonkyung, Lin, Yi, and Wahba, Grace. “Multicategory support vector machines: Theory and application to the classification of microarray data and satellite radiance data”. Journal of the American Statistical Association, 99(465):67–81, 2004.
See also the McAllester 2007 paper: D. McAllester, “Generalization Bounds and Consistency for Structured Labeling”, in Predicting Structured Data, edited by G. Bakir, T. Hofmann, B. Scholkopf, A. Smola, B. Taskar, and S. V. N. Vishwanathan. MIT Press, 2007.
And interestingly, this recent paper shows that the multiclass SVM is consistent for a loss on 3 classes with an “abstain” notion: Ramaswamy, Harish G. and Agarwal, Shivani. “Convex calibration dimension for multiclass loss matrices”. JMLR, 17(14):1–45, 2016.
see also the extensive related work section of the arxiv 2017 paper by Osokin et al.
Topics:
CRF objective and dual; optimization algorithms – online exponentiated gradient
Variance reduction for incremental gradient methods: SAG, SAGA, SVRG, etc.
Pointers:
Online exponentiated gradient for CRF paper: Collins, M., Globerson, A., Koo, T., Carreras, X., and Bartlett, P. L. “Exponentiated gradient algorithms for conditional random fields and max-margin Markov networks”, JMLR, 9:1775-1822, 2008.
Stochastic average gradient (SAG):
Original NIPS 2012 paper: N. Le Roux, M. Schmidt, F. Bach, “A Stochastic Gradient Method with an Exponential Convergence Rate for Finite Training Sets.”, NIPS 2012.
Massive journal version: M. Schmidt, N. Le Roux, F. Bach. “Minimizing Finite Sums with the Stochastic Average Gradient” Mathematical Programming, 162:83-162, 2017. (arxiv)
SAGA paper – an unbiased version of SAG (with a simpler proof) which also describes the related SDCA and SVRG methods (to be covered next class; see also the references therein for SDCA and SVRG): A. Defazio, F. Bach and S. Lacoste-Julien, “SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly Convex Composite Objectives”, NIPS 2014.
For an even more bare-bones proof, see: T. Hofmann, A. Lucchi, S. Lacoste-Julien, and B. McWilliams, “Variance Reduced Stochastic Gradient Descent with Neighbors”, NIPS 2015.
Topics:
Continue variance reduction for SGD methods: SAGA paper
Pointers:
Variance reduction perspective on SAG / SAGA / SVRG – in SAGA paper: A. Defazio, F. Bach and S. Lacoste-Julien. “SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly Convex Composite Objectives”, NIPS 2014.
other pointers:
SVRG paper: Rie Johnson and Tong Zhang “Accelerating stochastic gradient descent using predictive variance reduction”, NIPS 2013.
Note that a variant of SVRG that is adaptive to local strong convexity is given in the following paper (where the end of the inner loop is decided randomly: at every inner loop iteration, with probability 1/n, you end the inner loop): T. Hofmann, A. Lucchi, S. Lacoste-Julien, and Brian McWilliams, “Variance Reduced Stochastic Gradient Descent with Neighbors”, NIPS 2015.
The practical aspects of SAG are described in the massive journal version: M. Schmidt, N. Le Roux, F. Bach. “Minimizing Finite Sums with the Stochastic Average Gradient” Mathematical Programming, 162:83-162, 2017. (arxiv)
An alternative to the complicated “lagged updates” when you have sparse features is the Sparse SAGA algorithm; see Section 2 of: R. Leblond, F. Pedregosa and S. Lacoste-Julien, “ASAGA: Asynchronous Parallel SAGA”, AISTATS 2017.
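To make the variance-reduction idea above concrete, here is a minimal sketch of the (non-proximal) SAGA update for minimizing (1/n) sum_i f_i(x); grad_i(i, x) is an assumed oracle returning the gradient of f_i at x.

    import numpy as np

    def saga(grad_i, x0, n, step, T):
        # grad_i(i, x) -> gradient of f_i at x (numpy array), assumed given
        x = x0.copy()
        table = np.array([grad_i(i, x0) for i in range(n)])   # stored past gradients, one per component
        table_mean = table.mean(axis=0)
        for _ in range(T):
            j = np.random.randint(n)
            g_new = grad_i(j, x)
            v = g_new - table[j] + table_mean                 # unbiased variance-reduced gradient estimate
            x -= step * v
            table_mean += (g_new - table[j]) / n              # update the running mean of the table
            table[j] = g_new                                  # overwrite the stored gradient for component j
        return x

Unlike SAG, the correction term g_new - table[j] + table_mean is an unbiased estimate of the full gradient, which is what makes the SAGA proof simpler.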
Topics:
Application of SAG to CRFs
Proximal gradient method
General acceleration scheme: catalyst
Non-convex optimization
Pointers:
SAG for CRF paper: M. Schmidt, R. Babanezhad, M.O. Ahmed, A. Defazio, A. Clifton, A. Sarkar, “Non-Uniform Stochastic Average Gradient Method for Training Conditional Random Fields”, AISTATS 2015.
Proximal gradient method: see slides of this great optimization class by L. Vandenberghe.
Catalyst – meta-algorithm for acceleration: Hongzhou Lin, Julien Mairal, Zaid Harchaoui, “A Universal Catalyst for First-Order Optimization”, NIPS 2015.
Non-convex optimization: see slides from Suvrit Sra at the NIPS 2016 tutorial on “Large-Scale Optimization: Beyond Stochastic Gradient Descent and Convexity” (e.g. table of rates on p. 20 and 22)
Other good optimization pointers:
great coverage of convex optimization by Mark Schmidt at the Machine Learning Summer School in 2015 - slides | video
two great classes on optimization:
EE236C - Optimization Methods for Large-Scale Systems (Spring 2016) - Prof. L. Vandenberghe, UCLA - link
Convex optimization class by Ryan Tibshirani at CMU - Fall 2015 - link
slides on the Frank-Wolfe lecture
tutorial by Francis Bach on SAG et al. at NIPS 2016 – slides
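For the proximal gradient method listed in this lecture's topics, the update it refers to (standard form) for a composite objective f + h with f smooth and h “simple”:
\[
x_{k+1} = \operatorname{prox}_{\gamma h}\!\big( x_k - \gamma \nabla f(x_k) \big),
\qquad
\operatorname{prox}_{\gamma h}(z) = \operatorname*{argmin}_{u}\; h(u) + \tfrac{1}{2\gamma}\|u - z\|^2 .
\]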
Topics:
Latent structured SVM + CCCP
kernels
RNN and deep learning
Pointers:
latent variable SVMstruct: Chun-Nam Yu, Thorsten Joachims, “Learning Structural SVMs with Latent Variables”, ICML 2009.
others:
hidden CRF: A. Quattoni, S. Wang, L. Morency, M. Collins, and T. Darrell, “Hidden Conditional Random Fields”, TPAMI 2007.
deformable part models for object recognition (highly cited paper): P. Felzenszwalb, R. Girshick, D. McAllester, D. Ramanan, “Object Detection with Discriminatively Trained Part Based Models”, TPAMI 2010.
CCCP procedure convergence rate: Ian E.H. Yen, Nanyun Peng, Po-Wei Wang and Shou-de Lin, “On Convergence Rate of Concave-Convex Procedure”, NIPS 2012 OPT Workshop (not considered a publication by the way)
kernels:
example of early paper presenting kernels for structured SVM: Juho Rousu, Craig Saunders, Sandor Szedmak, John Shawe-Taylor, “Kernel-Based Learning of Hierarchical Multilabel Classification Models”, JMLR 2006
example of application: L. Bertelli, T. Yu, D. Vu, and B. Gokturk, “Kernelized structural SVM learning for supervised object segmentation”, CVPR 2011.
for the computation in BCFW, see Appendix B.5 in the usual: S. Lacoste-Julien, M. Jaggi, M. Schmidt and P. Pletscher, “Block-Coordinate Frank-Wolfe Optimization for Structural SVMs”, ICML 2013.
the standard book for kernels: Bernhard Schölkopf and Alexander J. Smola, “Learning with Kernels”, MIT Press 2001
deep learning:
see chapter 10 of the “Deep learning book” for RNNs
head detection plug-in example mentioned in class: Tuan-Hung Vu, Anton Osokin, and Ivan Laptev, “Context-aware CNNs for person head detection”, ICCV 2015
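For reference, the latent structural SVM objective (as in the Yu & Joachims paper above, up to regularization scaling) is a difference of convex functions, which is exactly what CCCP exploits:
\[
\min_{w}\; \frac{\lambda}{2}\|w\|^2
+ \sum_{i}\Big[ \max_{\hat{y}, \hat{h}} \big( \langle w, \Phi(x_i, \hat{y}, \hat{h}) \rangle + L(y_i, \hat{y}, \hat{h}) \big)
- \max_{h} \langle w, \Phi(x_i, y_i, h) \rangle \Big] .
\]
CCCP alternates between imputing the latent variable h_i* = argmax_h <w, Phi(x_i, y_i, h)> (which linearizes the concave part) and solving the resulting standard structural SVM in w.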
Topics:
Finish RNN and deep learning
Learning to search
Submodular optimization
Pointers:
Encoder-decoder RNN model (seq2seq): see chapter 10.4 of deep learning book.
learning to search: see the great ICML 2015 tutorial
LOLS paper: Kai-Wei Chang, Akshay Krishnamurthy, Alekh Agarwal, Hal Daumé III, John Langford, “Learning to Search Better than Your Teacher”, ICML 2015.
submodularity:
website with tutorials and pointers: http://submodularity.org/
detailed monograph by Francis Bach: F. Bach. “Learning with Submodular Functions: A Convex Optimization Perspective”, Foundations and Trends in Machine Learning, 6(2-3):145-373, 2013 | slides
2:30pm – 5pm in mezzanine of Jean-Coutu building's atrium