## Gated Softmax Classification

## How to train a mixture of 100,000,000,000,000,000,000,000 logistic regressors

The Gated Softmax model [pdf, NIPS 2010] is
a log-*bi*linear classifier that classifies inputs by integrating
over binary latent "style" variables. The integration can be
performed in closed form, and learning amounts to unconstrained,
gradient-based optimization.
Integrating over style variables allows the model to deal with
invariances elegantly, by simply *learning* about them
from training data.
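
To make the closed-form integration concrete, here is a minimal numpy
sketch of the marginal class probabilities, assuming the unfactored
parameterization with a parameter tensor indexed by (input dimension,
latent variable, class). All names and shapes are illustrative, not
taken from the released code.

```python
import numpy as np

def gated_softmax_probs(X, W):
    """Closed-form class probabilities of a gated softmax model.

    X: (N, D) inputs. W: (D, K, C) parameter tensor over D input
    dimensions, K binary latent variables, and C classes.
    """
    # Bilinear activations a[n, k, c] = sum_d X[n, d] * W[d, k, c].
    a = np.einsum('nd,dkc->nkc', X, W)
    # Summing out each binary latent variable contributes a factor
    # (1 + exp(a)) per class, i.e. softplus(a) in the log domain.
    log_unnorm = np.logaddexp(0.0, a).sum(axis=1)        # (N, C)
    # Numerically stable softmax over classes.
    log_unnorm -= log_unnorm.max(axis=1, keepdims=True)
    p = np.exp(log_unnorm)
    return p / p.sum(axis=1, keepdims=True)
```

Because the latent variables decouple given the input and the class,
summing over all 2^K configurations costs only K softplus terms per
class.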

It can be shown that a model with K latent variables is
**equivalent to a mixture of 2^K logistic regression models.**
Weight-sharing
prevents the number of parameters from blowing up. This makes it
possible to train a mixture of about 100,000,000,000,000,000,000,000 linear
classifiers and to apply it to test data in closed form. An implementation
of the model in Python is provided below.
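
One way to see the equivalence is to enumerate all 2^K latent
configurations explicitly for a tiny K and compare against the closed
form. The sketch below reuses `gated_softmax_probs` from above; again,
names and shapes are illustrative.

```python
import itertools
import numpy as np

def probs_as_mixture(X, W):
    """Sum explicitly over all 2**K latent configurations, treating each
    configuration h as selecting one linear classifier of the mixture.
    Only feasible for tiny K; the closed form never enumerates them."""
    a = np.einsum('nd,dkc->nkc', X, W)                   # (N, K, C)
    N, K, C = a.shape
    log_p = np.full((N, C), -np.inf)
    for h in itertools.product([0.0, 1.0], repeat=K):
        # Log-score of the component picked out by configuration h.
        log_p = np.logaddexp(log_p, np.einsum('nkc,k->nc', a, np.array(h)))
    log_p -= log_p.max(axis=1, keepdims=True)
    p = np.exp(log_p)
    return p / p.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
X, W = rng.normal(size=(5, 4)), rng.normal(size=(4, 3, 2))   # K = 3
assert np.allclose(probs_as_mixture(X, W), gated_softmax_probs(X, W))
```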

### Code

The following two Python modules implement two versions of the model.
Both modules make use of GPUs via V. Mnih's cudamat package
(linked below).

- `gatedSoftmaxCuda.py`: the basic, "unfactored" model.

- `gatedSoftmaxFactoredCuda.py`: the "factored" model, whose parameter
  tensor is represented by low-rank matrices. This makes it possible to
  represent invariances using shared basis functions, as described in
  the paper (see the sketch after this list).
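
To make the low-rank idea concrete, here is a rough numpy sketch of the
factored activations, assuming a three-way factorization of the
parameter tensor into input, latent, and class factors (the factor
names `Wx`, `Wh`, `Wy` are mine, not from the released code).

```python
import numpy as np

def factored_activations(X, Wx, Wh, Wy):
    """Activations under a factored parameterization, where
    W[d, k, c] = sum_f Wx[d, f] * Wh[k, f] * Wy[c, f].

    X: (N, D). Wx: (D, F). Wh: (K, F). Wy: (C, F).
    """
    # Project the inputs onto the F shared basis functions first, then
    # gate by the latent and class factors; the full D x K x C tensor
    # is never materialized.
    return np.einsum('nf,kf,cf->nkc', X @ Wx, Wh, Wy)
```

Feeding these activations into the closed-form marginalization sketched
above yields the factored model's class probabilities.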

Prerequisites: numpy,
cudamat.
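
To verify that cudamat is installed and can reach the GPU, a smoke test
along the lines of cudamat's own usage examples (not taken from the
modules above) may help:

```python
import numpy as np
import cudamat as cm

cm.cublas_init()                     # initialize the GPU context

# Create two random matrices on the host and copy them to the GPU.
a = cm.CUDAMatrix(np.random.rand(32, 256))
b = cm.CUDAMatrix(np.random.rand(256, 8))

c = cm.dot(a, b)                     # matrix product on the GPU
print(c.asarray().shape)             # copy back to the host: (32, 8)

cm.cublas_shutdown()
```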

The bottom of each file (the `__name__ == '__main__'` clause) contains
example code that instantiates the models and applies them to dummy data.

### Errata

The gradient in the NIPS 2010 paper
contains an error, which is corrected here.

### References

Memisevic, R., Zach, C., Hinton, G., and Pollefeys, M.

**Gated Softmax Classification**

Neural Information Processing Systems (NIPS), 2010.
[pdf]