How to train a mixture of 100,000,000,000,000,000,000,000 logistic regressors
The Gated Softmax model [pdf, NIPS 2010] is
a log-bilinear classifier that classifies inputs by integrating
over binary latent "style" variables. The integration can be
performed in closed form, and learning is by unconstrained,
gradient-based optimization.
Integrating over style variables allows the model to deal with
invariances elegantly, by simply learning about them
from training data.
It can be shown that a model with K latent variables is
equivalent to a mixture of 2^K logistic regression models.
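The equivalence follows because the model's log-bilinear score is linear
in the binary latent vector h, so the sum over all 2^K configurations
factorizes into K independent sums (a sketch with bias terms omitted,
writing the three-way parameter tensor as W_ik^y):

    p(y|x)  proportional to  sum over h in {0,1}^K of exp( sum_{i,k} x_i h_k W_ik^y )
            =  product over k of ( 1 + exp( sum_i x_i W_ik^y ) )

Each of the 2^K terms in the sum is the exponential of a linear function
of x, i.e. one component linear classifier, yet the product on the right
costs only O(K) to evaluate.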
Sharing parameters across the mixture components prevents the number of
parameters from blowing up. This makes it possible to train a mixture of
about 100,000,000,000,000,000,000,000 linear classifiers and apply it to
test data in closed form. An implementation
of the model in Python is provided below.
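To make the closed-form computation concrete, here is a minimal numpy
sketch of the predictive distribution (the function name
gated_softmax_probs and the parameter layout are hypothetical
illustrations, and bias terms are omitted; the modules below are the
actual implementation):

    import numpy as np

    def gated_softmax_probs(X, W):
        # X: (num_cases, num_inputs) inputs
        # W: (num_classes, num_inputs, K) per-class weight matrices
        #    (hypothetical layout; one weight vector per latent variable)
        scores = np.einsum('ni,cik->nck', X, W)       # (cases, classes, K)
        # closed-form marginalization over the 2^K binary configurations:
        # log sum_h exp(sum_k h_k s_k) = sum_k log(1 + exp(s_k))
        logp = np.logaddexp(0.0, scores).sum(axis=2)  # unnormalized log-probs
        logp -= logp.max(axis=1, keepdims=True)       # numerical stability
        p = np.exp(logp)
        return p / p.sum(axis=1, keepdims=True)

    # Example: 3 classes, 10 inputs, K = 20, i.e. implicitly a mixture
    # of 2^20 (about a million) logistic regressors:
    X = np.random.randn(5, 10)
    W = 0.01 * np.random.randn(3, 10, 20)
    print(gated_softmax_probs(X, W))                  # each row sums to 1

With around K = 77 latent variables, 2^K already reaches the 10^23
classifiers of the title, while the number of parameters grows only
linearly in K.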
The following two Python modules implement two versions of the model.
Both modules make use of GPUs via V. Mnih's cudamat package.
The "factored" model represents its parameter tensor by a product of
three matrices. This makes it possible to represent invariances using
shared basis functions, as described in the paper.
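For illustration, here is a numpy sketch of the factored version, under
the assumption that the tensor is factorized as
W_ik^y = sum_f U_if V_kf Z_yf (the matrix names U, V, Z and the function
name are hypothetical; consult the module itself for the real interface):

    import numpy as np

    def factored_gated_softmax_probs(X, U, V, Z):
        # U: (num_inputs, F)  shared input filters ("basis functions")
        # V: (K, F)           latent-variable factors
        # Z: (num_classes, F) class factors
        fx = X @ U                                     # filter responses
        scores = np.einsum('nf,kf,cf->nck', fx, V, Z)  # (cases, classes, K)
        logp = np.logaddexp(0.0, scores).sum(axis=2)   # closed-form marginal
        logp -= logp.max(axis=1, keepdims=True)        # numerical stability
        p = np.exp(logp)
        return p / p.sum(axis=1, keepdims=True)

Because every class and every latent variable reuse the same filter
matrix U, invariances learned by the filters are shared across the whole
mixture; the released modules perform the corresponding matrix products
on the GPU via cudamat rather than numpy.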