Denoising autoencoders vs. ordinary autoencoders

A denoising autoencoder is like an ordinary autoencoder, except that during training the input seen by the autoencoder is not the raw input but a stochastically corrupted version of it. A denoising autoencoder is therefore trained to reconstruct the original input from its noisy version. For more information, see the ICML 2008 paper by Vincent et al., Extracting and Composing Robust Features with Denoising Autoencoders.
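
To make this concrete, here is a minimal sketch of a denoising autoencoder in NumPy: a single hidden layer with tied weights, masking noise as the corruption process, and a cross-entropy reconstruction loss. All names, sizes, and hyperparameters (the DenoisingAutoencoder class, the 0.3 corruption level, the learning rate) are illustrative assumptions, not taken from the article.

```python
import numpy as np


def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))


class DenoisingAutoencoder:
    """Single-hidden-layer autoencoder with tied weights and masking noise."""

    def __init__(self, n_visible, n_hidden, corruption=0.3, lr=0.1, seed=0):
        self.rng = np.random.default_rng(seed)
        # Tied weights: W encodes, W.T decodes.
        self.W = self.rng.normal(0.0, 0.01, (n_visible, n_hidden))
        self.b_h = np.zeros(n_hidden)    # hidden biases
        self.b_v = np.zeros(n_visible)   # reconstruction biases
        self.corruption = corruption
        self.lr = lr

    def corrupt(self, x):
        # Masking noise: each component is zeroed with probability `corruption`.
        return x * (self.rng.random(x.shape) >= self.corruption)

    def reconstruct(self, x):
        h = sigmoid(x @ self.W + self.b_h)            # code
        return h, sigmoid(h @ self.W.T + self.b_v)    # reconstruction

    def train_step(self, x):
        """One stochastic gradient step on one example x with values in [0, 1]."""
        x_tilde = self.corrupt(x)          # the network only sees the corrupted input...
        h, z = self.reconstruct(x_tilde)
        # ...but the cross-entropy reconstruction error targets the clean input.
        loss = -np.sum(x * np.log(z + 1e-10) + (1 - x) * np.log(1 - z + 1e-10))
        dz = z - x                         # gradient at the decoder pre-activation
        dh = (dz @ self.W) * h * (1 - h)   # gradient at the encoder pre-activation
        # Tied weights: W gets gradient contributions from both directions.
        self.W -= self.lr * (np.outer(x_tilde, dh) + np.outer(dz, h))
        self.b_h -= self.lr * dh
        self.b_v -= self.lr * dz
        return loss
```

The only difference from an ordinary autoencoder is inside train_step: the hidden code is computed from the corrupted input x_tilde, while the loss still targets the clean input x.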

Principal differences between ordinary autoencoders and denoising autoencoders:

What it does
    Ordinary autoencoder: finds a compact representation of the input.
    Denoising autoencoder: captures the joint distribution of the inputs.

Learning criterion
    Ordinary autoencoder: deterministic.
    Denoising autoencoder: stochastic.

Number of hidden units
    Ordinary autoencoder: must be limited, to avoid learning the identity function.
    Denoising autoencoder: as many as necessary to capture the distribution.

How to choose the model capacity (i.e. the number of hidden units)
    Ordinary autoencoder: impossible with the standard reconstruction error, since it always decreases as hidden units are added.
    Denoising autoencoder: can use the mean reconstruction error.

How to choose the number of training iterations
    Ordinary autoencoder: impossible with the reconstruction error; use the classification error after supervised fine-tuning.
    Denoising autoencoder: can do early stopping using the mean reconstruction error (see the sketch after this list).

How to choose the amount of corrupting noise
    Ordinary autoencoder: not applicable.
    Denoising autoencoder: cannot use the reconstruction error; use the classification error after supervised fine-tuning.
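
To illustrate the model-selection points above, here is a sketch of early stopping on the mean denoising reconstruction error, reusing the illustrative DenoisingAutoencoder class from the earlier example. X_train and X_valid are assumed to be arrays of examples with values in [0, 1]; the patience and tolerance values are illustrative.

```python
import numpy as np


def mean_denoising_error(dae, X_valid):
    """Mean reconstruction error of the denoising task on held-out data:
    corrupt each example, reconstruct it, and score the reconstruction
    against the clean original. Unlike an ordinary autoencoder's
    reconstruction error, this does not shrink toward zero merely
    because the code layer is large."""
    total = 0.0
    for x in X_valid:
        _, z = dae.reconstruct(dae.corrupt(x))
        total -= np.sum(x * np.log(z + 1e-10) + (1 - x) * np.log(1 - z + 1e-10))
    return total / len(X_valid)


# Early stopping: halt training once the held-out error stops improving.
dae = DenoisingAutoencoder(n_visible=784, n_hidden=500)
best_err, patience = np.inf, 0
for epoch in range(100):
    for x in X_train:
        dae.train_step(x)
    err = mean_denoising_error(dae, X_valid)
    if err < best_err - 1e-4:     # meaningful improvement
        best_err, patience = err, 0
    else:
        patience += 1
        if patience >= 5:         # illustrative patience threshold
            break
```

The same held-out mean error can be compared across models with different numbers of hidden units or corruption levels, which is what makes capacity selection possible here where it is not for the ordinary autoencoder.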