
Preface

This is an online companion for the paper An Empirical Evaluation of Deep Architectures on Problems with Many Factors of Variation by Hugo Larochelle, Dumitru Erhan, Aaron Courville, James Bergstra and Yoshua Bengio, to appear in the proceedings of the International Conference on Machine Learning (2007). A camera-ready version of this paper can be downloaded from here. This document provides additional information regarding the generation of the datasets, details of our experiments and downloadable versions of the datasets that we used. It overlaps partly with the paper itself, but it is meant to be used in conjunction with the paper in order to get a deeper (pun somewhat intended) understanding of our experiments.

Introduction

Recently, several learning algorithms relying on models with deep architectures have been proposed. Though they have demonstrated impressive performance, to date, they have only been evaluated on relatively simple problems such as digit recognition in a controlled environment, for which many machine learning algorithms already report reasonable results. Here, we present a series of experiments which indicate that these models show promise in solving harder learning problems that exhibit many factors of variation. These models are compared with well established algorithms such as Support Vector Machines and single-layer feed-forward neural networks.

For more details on the motivation of this work, we redirect the reader to the first sections of the paper behind this web page: An Empirical Evaluation of Deep Architectures on Problems with Many Factors of Variation.

Description of the algorithms

We define a shallow model as a model with very few layers of composition, e.g. linear models, one-hidden layer neural networks and kernel SVMs:

shallow_models.png

On the other hand, deep architecture models are models whose output is the result of composing a number of computational units, where the number of units needed does not grow exponentially with characteristics of the problem such as the number of factors of variation or the number of inputs. These units are generally organized in layers, so that many levels of computation can be composed. The functions computed by such models can typically only be approximated by a shallow model with a very large (possibly exponential) number of computational units. In other words, deep models are able to implement very complex functions with relatively few parameters.

Here are the deep architecture models that we considered in our paper. More details about the corresponding training algorithms can be found in the paper and pseudo-codes can be found in the appendix of the technical report for Greedy Layer-Wise Training of Deep Networks.

  • Deep Belief Networks (DBN): Hinton et al. (2006) introduced this generative model with arbitrarily many layers of stochastic neurons and developed a training algorithm for it based on a greedy layer-wise generative learning procedure. This training strategy was presented and analyzed by Bengio et al. (2007), who concluded that greedy unsupervised learning is a key ingredient in finding a solution to the problem of training deep networks. While the lower layers of a DBN extract “low-level features” from the input x, the upper layers are supposed to represent more “abstract” concepts that explain x.

dbn_model.png
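To make the greedy layer-wise procedure concrete, here is a compact numpy sketch of a single contrastive divergence (CD-1) update for a binary RBM, the building block that is stacked to form a DBN. This is only an illustration, not the code used in the experiments; the actual pseudo-code is given in the appendix of the technical report mentioned above.

import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def cd1_update(v0, W, b_h, b_v, lr, rng):
    """One CD-1 step on a single binary visible vector v0 (illustrative only)."""
    p_h0 = sigmoid(v0 @ W + b_h)                   # hidden probabilities given the data
    h0 = (rng.random(p_h0.shape) < p_h0) * 1.0     # sample binary hidden states
    p_v1 = sigmoid(h0 @ W.T + b_v)                 # one-step "reconstruction" of the input
    p_h1 = sigmoid(p_v1 @ W + b_h)                 # hidden probabilities for the reconstruction
    W += lr * (np.outer(v0, p_h0) - np.outer(p_v1, p_h1))   # positive minus negative statistics
    b_h += lr * (p_h0 - p_h1)
    b_v += lr * (v0 - p_v1)
    return W, b_h, b_v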

  • Stacked Autoassociators: As demonstrated by Bengio et al. (2007), the idea of successively extracting non-linear features that “explain” variations of the features at the previous level can be applied not only to RBMs but also to autoassociators. An autoassociator is simply a model (usually a one-hidden-layer neural network) trained to reproduce its input by forcing the computations to flow through a “bottleneck” representation.

saa_model.png
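Similarly, the following sketch illustrates greedy layer-wise training of stacked autoassociators with tied weights, sigmoid units and plain stochastic gradient descent on the squared reconstruction error. The hyper-parameters and architectural details here are illustrative only and do not correspond to those selected in the experiments below.

import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def train_autoassociator(X, n_hidden, lr=0.01, n_epochs=10, seed=0):
    rng = np.random.default_rng(seed)
    n_in = X.shape[1]
    W = rng.normal(0.0, 0.01, size=(n_in, n_hidden))  # tied encoder/decoder weights
    b_h = np.zeros(n_hidden)
    b_v = np.zeros(n_in)
    for _ in range(n_epochs):
        for x in X[rng.permutation(len(X))]:
            h = sigmoid(x @ W + b_h)                  # encode through the bottleneck
            r = sigmoid(h @ W.T + b_v)                # decode (reconstruction)
            d_r = (r - x) * r * (1 - r)               # gradient at the output pre-activation
            d_h = (d_r @ W) * h * (1 - h)             # back-propagated to the hidden layer
            W -= lr * (np.outer(x, d_h) + np.outer(d_r, h))
            b_h -= lr * d_h
            b_v -= lr * d_r
    return W, b_h

def greedy_pretrain(X, layer_sizes):
    """Returns per-layer (W, b) parameters used to initialize a deep network."""
    params, rep = [], X
    for n_hidden in layer_sizes:
        W, b = train_autoassociator(rep, n_hidden)
        params.append((W, b))
        rep = sigmoid(rep @ W + b)                    # representation fed to the next layer
    return params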

In the experimental results section, we compare these models with shallow architectures such as single-layer feed-forward networks and Support Vector Machines.

Description of the datasets

In order to study the capacity of these algorithms to scale to learning problems with many factors of variation, we have generated datasets where we can identify some of these factors of variation explicitly. We focused on vision problems, mostly because they are easier to generate and analyze. In all cases, the classification problem has a balanced class distribution.

Variations on MNIST

In one series of experiments, we construct new datasets by adding additional factors of variation to the MNIST images.

The datasets were generated with the following process (a code sketch is given after the list):

  1. Pick a sample $(x,y) \in {\cal X}$ from the MNIST digit recognition dataset (http://yann.lecun.com/exdb/mnist/);
  2. Create a perturbed version $\widehat{x}$ of $x$ according to some factors of variation;
  3. Add $(\widehat{x},y)$ to the new dataset $\widehat{{\cal X}}$;
  4. Go back to step 1 until enough samples have been generated.
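A minimal sketch of this generation loop is given below; `load_mnist` and `perturb` are hypothetical placeholders for the MNIST loader and for the transformation implementing the chosen factors of variation (rotation, background insertion, etc.).

import numpy as np

def make_perturbed_dataset(load_mnist, perturb, n_samples, seed=0):
    rng = np.random.default_rng(seed)
    images, labels = load_mnist()            # e.g. images: (N, 28, 28), labels: (N,)
    new_x, new_y = [], []
    while len(new_x) < n_samples:
        i = rng.integers(len(images))        # 1. pick a sample (x, y) from MNIST
        x_hat = perturb(images[i], rng)      # 2. perturb x according to the factors of variation
        new_x.append(x_hat)                  # 3. add (x_hat, y) to the new dataset
        new_y.append(labels[i])              # 4. repeat until enough samples are generated
    return np.array(new_x), np.array(new_y)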

Introducing multiple factors of variation leads to the following benchmarks:

  • mnist-rot: the digits were rotated by an angle generated uniformly between 0 and $2 \pi$ radians. Thus the factors of variation are the rotation angle and the factors of variation already contained in MNIST, such as handwriting style:

rotation_examples.png

  • mnist-back-rand: a random background was inserted in the digit image. Each pixel value of the background was generated uniformly between 0 and 255;

mnist_back_random.png

  • mnist-back-image: a patch from a black and white image was used as the background for the digit image. The patches were extracted randomly from a set of 20 images downloaded from the internet. Patches which had low pixel variance (i.e. contained little texture) were ignored;

mnist_back_image.png

  • mnist-rot-back-image: the perturbations used in mnist-rot and mnist-back-image were combined.

mnist_rot_back_image.png

Discrimination between tall and wide rectangles

In this task, a learning algorithm needs to recognize whether the rectangle contained in an image is wider than it is tall or taller than it is wide. The rectangle can be situated anywhere in the 28 x 28 pixel image. We generated two datasets for this problem:

  • rectangles: the pixels corresponding to the border of the rectangle have a value of 255, and the rest are 0. The height and width of the rectangles were sampled uniformly, but samples were rejected when their difference was smaller than 3 pixels. The top-left corner of the rectangles was also sampled uniformly, with the constraint that the whole rectangle fits in the image (a sampling sketch for this dataset is given at the end of this subsection).

rectangles.png

  • rectangles-image: the border and inside of each rectangle correspond to an image patch, and a separate patch is sampled for the background. The image patches are extracted from one of the 20 images used by mnist-back-image. The sampling of the rectangles is essentially the same as for rectangles, but the area covered by the rectangles was constrained to be between 25% and 75% of the total image, the height and width of the rectangles were forced to be at least 10 pixels, and their difference was forced to be at least 5 pixels.

rectangles_images.png
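The following sketch illustrates the rejection sampling described for the plain rectangles dataset (border pixels set to 255, height/width difference of at least 3 pixels, rectangle fully inside the 28 x 28 image). It follows the constraints stated above, but the original generation scripts (attached below) may differ in details such as the label convention.

import numpy as np

def sample_rectangle(rng, size=28, min_diff=3):
    while True:                                    # reject until |height - width| >= 3
        h, w = rng.integers(1, size + 1, size=2)
        if abs(h - w) >= min_diff:
            break
    top = rng.integers(0, size - h + 1)            # top-left corner, rectangle stays inside
    left = rng.integers(0, size - w + 1)
    img = np.zeros((size, size), dtype=np.uint8)
    img[top, left:left + w] = 255                  # top border
    img[top + h - 1, left:left + w] = 255          # bottom border
    img[top:top + h, left] = 255                   # left border
    img[top:top + h, left + w - 1] = 255           # right border
    label = int(w > h)                             # label convention assumed: 1 = wide, 0 = tall
    return img, label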

Recognition of convex sets

The convex sets consist of a single convex region with pixels of value 255. Candidate convex images were constructed by taking the intersection of a number of half-planes whose location and orientation were chosen uniformly at random. The number of intersecting half-planes was also sampled randomly according to a geometric distribution with parameter 0.195. A candidate convex image was rejected if there were less than 19 pixels in the convex region.

Candidate non-convex images were constructed by taking the union of a random number of convex sets generated as above, but with the number of half-planes sampled from a geometric distribution with parameter 0.07 and with a minimum number of 10 pixels. The number of convex sets was sampled uniformly from 2 to 4. The candidate non-convex images were then tested by checking a convexity condition for every pair of pixels in the non-convex set. Those sets that failed the convexity test were added to the dataset.

The parameters for generating the convex and non-convex sets were balanced to ensure that the mean number of pixels of value 255 is the same in the two datasets.

convex.png
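The sketch below illustrates how a candidate convex image can be generated as the intersection of a geometric(0.195) number of half-planes, with rejection when fewer than 19 pixels survive. Only the distribution parameters come from the text; the way each half-plane's position and orientation is parameterized here is an assumption, and the original scripts may proceed differently.

import numpy as np

def sample_convex_image(rng, size=28, p=0.195, min_pixels=19):
    yy, xx = np.mgrid[0:size, 0:size]
    while True:
        n_planes = rng.geometric(p)                           # number of intersecting half-planes
        mask = np.ones((size, size), dtype=bool)
        for _ in range(n_planes):
            theta = rng.uniform(0.0, 2.0 * np.pi)             # orientation of the half-plane (assumed)
            cx, cy = rng.uniform(0.0, size, size=2)           # a point on its boundary (assumed)
            mask &= (xx - cx) * np.cos(theta) + (yy - cy) * np.sin(theta) <= 0
        if mask.sum() >= min_pixels:                          # reject images with too few pixels
            return (mask * 255).astype(np.uint8)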

Impact of background pixel correlation

In order to explore the space of learning problems standing between mnist-back-rand and mnist-back-image, we set up an experiment in which we could vary the amount of background pixel correlation. We are hence assuming that background correlation is the main characteristic that distinguishes mnist-back-image from mnist-back-rand.

Correlated pixel noise was sampled from a zero-mean multivariate Gaussian distribution of dimension equal to the number of pixels: $s \sim \mathcal{N}(0,\Sigma)$. The covariance matrix, $\Sigma$, was defined as a convex combination of an identity matrix and a Gaussian kernel function. Representing the position of the $i$ th pixel with the vector $(x_{i}, y_{i})$, we have:

$\Sigma_{ij} = \gamma I_{x_{i} = x_{j}} I_{y_{i} = y_{j}} + (1-\gamma)e^{-\left(\frac{(x_{i}-x_{j})^{2}}{\sigma^{2}} + \frac{(y_{i}-y_{j})^{2}}{\sigma^2}\right)}$

with kernel bandwidth $\sigma = 6$. The Gaussian kernel induces a neighborhood correlation structure among the pixels, such that nearby pixels are more correlated than pixels farther apart. For each sample from $\mathcal{N}(0,\Sigma)$, the pixel values $p$ (ranging from 0 to 1) were determined by passing the elements of $s$ through a rescaled error function (the standard Gaussian CDF):

$p_{i} = \frac{1}{2}\left[1 + \operatorname{erf}\!\left(\frac{s_{i}}{\sqrt{2}}\right)\right] = \frac{1}{2} + \frac{1}{\sqrt{\pi}} \int_{0}^{s_{i}/\sqrt{2}} e^{-t^{2}}\, dt$

We generated six datasets with varying degrees of neighborhood correlation by setting the mixture weight $\gamma$ to the values $\{0, 0.2, 0.4, 0.6, 0.8, 1\}$. Since $\Sigma_{ii} = 1$, the marginal distribution of each pixel $p_{i}$ is uniform(0,1) for every value of $\gamma$.

noise_variations.png
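The following sketch reproduces the noise-generation procedure described above: the covariance matrix mixes an identity matrix with a Gaussian kernel of bandwidth 6, and the Gaussian sample is mapped to [0, 1] through the standard Gaussian CDF so that each pixel's marginal is uniform(0, 1). The exact output scaling used in the original scripts may differ slightly.

import numpy as np
from scipy.stats import norm

def correlated_noise(rng, size=28, gamma=0.5, bandwidth=6.0):
    coords = np.stack(np.meshgrid(np.arange(size), np.arange(size)), axis=-1).reshape(-1, 2)
    d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(axis=-1)   # squared pixel distances
    kernel = np.exp(-d2 / bandwidth ** 2)                                # Gaussian kernel term
    Sigma = gamma * np.eye(size * size) + (1.0 - gamma) * kernel         # convex combination
    s = rng.multivariate_normal(np.zeros(size * size), Sigma)            # correlated Gaussian sample
    p = norm.cdf(s)                                                      # uniform(0, 1) marginals
    return p.reshape(size, size)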

Downloadable datasets

All the datasets are provided as zip archives. Each archive contains two files -- a training (and validation) set and a test set. We used the last 2000 examples of the training sets as validation sets in all cases except rectangles (where we used the last 200), and, in the case of SVMs, we retrained the models on the entire set after choosing the optimal parameters on these validation sets. Data is stored as one example per row, with space-separated features. There are 784 features per example (28 x 28 images), corresponding to the first 784 columns of each row. The last column is the label, which is 0 to 9 for the MNIST variations and 1 or 0 for the rectangles, rectangles-images and convex datasets.
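Given this format, the files can be read with something as simple as the following; the filename is only a placeholder.

import numpy as np

data = np.loadtxt("mnist_basic_train_valid.txt")    # hypothetical filename after unzipping
X = data[:, :784].reshape(-1, 28, 28)               # first 784 columns: pixels of a 28 x 28 image
y = data[:, -1].astype(int)                         # last column: class label
X_train, X_valid = X[:-2000], X[-2000:]             # last 2000 examples used for validation
y_train, y_valid = y[:-2000], y[-2000:]             # (200 for rectangles)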

IMPORTANT New versions of the datasets containing rotations have been generated. There was an issue in the previous versions with the way rotated digits were generated, which increased the range of values a digit pixel could have. For instance, this issue made it easier to discern digits from the image background in the MNIST rotated+back-image dataset. New results for these datasets have been generated and are reported along with the other benchmark results.

Link to dataset | Size | File description
MNIST basic | 23M packed; 144M + 599M unpacked | 12000 train, 50000 test
MNIST + background images | 88M packed; 144M + 599M unpacked | 12000 train, 50000 test
MNIST + random background | 219M packed; 144M + 599M unpacked | 12000 train, 50000 test
Rotated MNIST digits | 56M packed; 56M + 338M unpacked | 12000 train, 50000 test
Old version: Rotated MNIST digits | 284M packed; 144M + 599M unpacked | 12000 train, 50000 test
Rotated MNIST digits + background images | 115M packed; 144M + 599M unpacked | 12000 train, 50000 test
Old version: Rotated MNIST digits + background images | 124M packed; 144M + 599M unpacked | 12000 train, 50000 test
Rectangles | 2.7M packed; 15M + 599M unpacked | 1200 train, 50000 test
Rectangles images | 82M packed; 144M + 599M unpacked | 12000 train, 50000 test
Convex-nonconvex sets | 3.4M packed; 96M + 599M unpacked | 8000 train, 50000 test
MNIST noise variation sets | 304M packed; 1008M unpacked | 6x12000 train, 6x2000 test

Attached is an archive that contains the scripts needed to generate the datasets, along with a README. Please contact us should you have any questions about the scripts.

Experimental setup and details

We conducted experiments with two deep architecture models: a 3 hidden layer Deep Belief Network (noted DBN-3) and a 3 hidden layer Stacked Autoassociator Network (noted SAA-3). In order to compare their performance, we also trained:

  • a standard single hidden layer feed-forward neural network (noted NNet), to measure the improvement provided by the additional layers and the unsupervised initialization used in DBN-3 and SAA-3
  • a single hidden layer Deep Belief Network (noted DBN-1), to isolate the improvement provided specifically by the additional layers used in DBN-3 and SAA-3
  • Support Vector Machine classifiers with Gaussian and polynomial kernels, which are popular reference points for classification models

In all cases, model selection was performed using a validation set. For NNet, the best combination of the number of hidden units (from 25 to 700), the learning rate (from 0.0001 to 0.1) and decrease constant (from 0 to $10^{-6}$) of stochastic gradient descent, and the weight decay penalization (from 0 to $10^{-5}$) was selected using a grid search.
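Schematically, the NNet search looks like the loop below; the grid points are illustrative (only the ranges are given above), and `train_and_validate` is a hypothetical function that trains one network and returns its validation error.

import itertools

def nnet_grid_search(train_and_validate):
    grid = {
        "n_hidden":       [25, 50, 100, 200, 400, 700],   # 25 to 700
        "learning_rate":  [1e-4, 1e-3, 1e-2, 1e-1],       # 0.0001 to 0.1
        "decrease_const": [0.0, 1e-7, 1e-6],              # 0 to 1e-6
        "weight_decay":   [0.0, 1e-6, 1e-5],              # 0 to 1e-5
    }
    configs = [dict(zip(grid, values)) for values in itertools.product(*grid.values())]
    return min(configs, key=train_and_validate)           # lowest validation error wins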

For DBN-3 and SAA-3, because these models can take more than a day to train, we could not perform a full grid search in the space of hyper-parameters. For both models, the number of hidden units per layer must be chosen, in addition to all the other optimization parameters (learning rates for the unsupervised and supervised phases, stopping criteria of the unsupervised phase, etc.). We therefore chose an approximate search procedure that we believed would find reasonably good values within the following grid:

Hyper-parameter | Range
Size of first hid. layer | [500, 3000]
Size of second hid. layer | [500, 4000]
Size of third hid. layer | [1000, 6000]
Learning rate of sup. phase | [0.0001, 0.1]
Learning rate of unsup. phase | [0.0001, 0.1]

The hyper-parameter search procedure we used alternates between fixing a neural network architecture and searching for good optimization hyper-parameters, similarly to coordinate descent. We usually spent more time on finding good optimization parameters, given empirical evidence we found indicating that the choice of the optimization hyper-parameters (mostly the learning rates) has much more influence on the resulting performance than the size of the network. We used the same procedure to find the hyper-parameters of DBN-1, which are the same as those of DBN-3 except for the sizes of the second and third hidden layers. We also allowed much larger first-hidden-layer sizes, in order to make the comparison between DBN-1 and DBN-3 fairer.

We usually started by testing a relatively small architecture (between 500 and 700 units in the first and second hidden layers, and between 1000 and 2000 hidden units in the last layer). Given the results obtained on the validation set (compared to those of NNet, for instance) after selecting appropriate optimization parameters, we would then consider growing the number of units in all layers simultaneously. The biggest networks we eventually tested had up to 3000, 4000 and 6000 hidden units in the first, second and third hidden layers respectively.

As for the optimization hyper-parameters, we would first try a few combinations of values for the stochastic gradient descent learning rates of the supervised and unsupervised phases (usually between 0.1 and 0.0001), and then refine the choice of values for these hyper-parameters. The first trials would simply give us a trend of the validation error for these parameters (is a change in the hyper-parameter making things worse or better?), and we would then take that information into account when selecting appropriate additional trials. One could choose to use learning rate adaptation techniques (e.g. slowly decreasing the learning rate or using momentum), but we did not find these techniques to be crucial.

For all neural networks, we used early stopping based on the error of the model on the validation set: if the best validation error is not improved upon for 5 consecutive epochs, training is stopped. As for the stopping criterion of the unsupervised phase, with SAA-3 we stopped greedily training a layer when the autoassociator reconstruction cost on the training set improved by less than 1% after an epoch, for training sets of 10000 samples, or by less than 10% for training sets of 1000 samples. With DBN-3, we did not use early stopping, because the RBM training criterion is not tractable. Instead, we tested 50 or 100 unsupervised learning epochs for each layer and selected the better choice based on the final accuracy of the model on the validation set.
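The supervised early-stopping rule amounts to the following; `train_one_epoch` and `validation_error` are hypothetical callbacks standing in for the actual training code.

def train_with_early_stopping(train_one_epoch, validation_error, patience=5, max_epochs=1000):
    best_err = float("inf")
    epochs_since_best = 0
    for _ in range(max_epochs):
        train_one_epoch()
        err = validation_error()
        if err < best_err:                 # new best validation error
            best_err = err
            epochs_since_best = 0
        else:
            epochs_since_best += 1
            if epochs_since_best >= patience:
                break                      # no improvement for 5 consecutive epochs: stop
    return best_err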

The experiments with the NNet, DBN-1, DBN-3 and SAA-3 models were conducted using the PLearn library, an Open Source C++ library for machine learning which was developed and is actively used in our lab.

In the case of SVMs with Gaussian kernels, we performed a two-stage grid search for the width of the kernel and the soft-margin parameter. In the first stage, we searched through a coarse logarithmic grid ranging from $\sigma = 10^{-7}$ to $1$ and $C = 0.1$ to $10^5$. In the second stage, we performed a more fine-grained search in the vicinity of that tuple $(\sigma,C)$ that gave the best validation error. For example, if the optimal tuple was $(\sigma,C) = (10^{-4},10)$, we would examine tuples from $\{2\cdot 10^{-5},4\cdot 10^{-5},6\cdot 10^{-5},8\cdot 10^{-5},10^{-4},2\cdot 10^{-4},4\cdot 10^{-4},6\cdot 10^{-4},8\cdot 10^{-4}\} \times \{5,10,15\}$. In the case of the polynomial kernel, the strategy was the same, except that we searched through all possible degrees of the polynomial up to 20 (no fine-grained search on this parameter, obviously).
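The two-stage search can be summarized as follows; `validation_error` is a hypothetical function that trains an RBF-kernel SVM (e.g. with libSVM) for a given $(\sigma, C)$ and returns its validation error, and the fine-grid multipliers follow the example above.

import itertools

import numpy as np

def two_stage_search(validation_error):
    # Stage 1: coarse logarithmic grid over (sigma, C).
    coarse_sigmas = 10.0 ** np.arange(-7, 1)          # 1e-7 ... 1
    coarse_Cs = 10.0 ** np.arange(-1, 6)              # 0.1 ... 1e5
    best_sigma, best_C = min(itertools.product(coarse_sigmas, coarse_Cs),
                             key=lambda sc: validation_error(*sc))
    # Stage 2: finer grid in the vicinity of the best coarse point.
    fine_sigmas = [m * best_sigma for m in (0.2, 0.4, 0.6, 0.8, 1, 2, 4, 6, 8)]
    fine_Cs = [0.5 * best_C, best_C, 1.5 * best_C]
    return min(itertools.product(fine_sigmas, fine_Cs),
               key=lambda sc: validation_error(*sc))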

Throughout the experiments we used the publicly available libSVM library, version 2.83 (the library has since been updated; the results reported below are those obtained with version 2.83).

Results

The confidence intervals on the mean test error are computed using the following formula: $\hat{\mu} \pm z_{1-\alpha/2} \sqrt{\frac{\hat\mu(1-\hat\mu)}{N}}$, where $\hat{\mu}$ is the estimated test error, $\alpha = 0.05$, $N = 50000$, and $z_{1-\alpha/2}$ is the $(1-\alpha/2)$ quantile of the standard Gaussian distribution (the inverse of its CDF).
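For instance, the half-width of the interval for DBN-3 on mnist-basic (test error 3.11%, $N = 50000$) can be computed as follows, giving the $\pm 0.15\%$ reported in the tables below.

import numpy as np
from scipy.stats import norm

def ci_halfwidth(error_rate, n, alpha=0.05):
    z = norm.ppf(1.0 - alpha / 2.0)                    # inverse standard Gaussian CDF
    return z * np.sqrt(error_rate * (1.0 - error_rate) / n)

print(ci_halfwidth(0.0311, 50000))                     # ~0.0015, i.e. +/- 0.15%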

For convenience, we provide two tables of results, one being the transpose of the other. The test errors marked with (*) are the lowest for a given dataset, or have confidence intervals that overlap with that of the lowest.

Dataset / algorithm | SVM RBF | SVM Poly | NNet | DBN-3 | SAA-3 | DBN-1
MNIST basic | 3.03 ± 0.15% (*) | 3.69 ± 0.17% | 4.69 ± 0.19% | 3.11 ± 0.15% (*) | 3.46 ± 0.16% | 3.94 ± 0.17%
MNIST rotated | 11.11 ± 0.28% | 15.42 ± 0.32% | 18.11 ± 0.34% | 10.30 ± 0.27% (*) | 10.30 ± 0.27% (*) | 14.69 ± 0.31%
Old version: MNIST rotated | 10.38 ± 0.27% (*) | 13.61 ± 0.30% | 17.62 ± 0.33% | 12.30 ± 0.29% | 11.43 ± 0.28% | 12.11 ± 0.29%
MNIST back-image | 22.61 ± 0.37% | 24.01 ± 0.37% | 27.41 ± 0.39% | 16.31 ± 0.32% (*) | 23.00 ± 0.37% | 16.15 ± 0.32% (*)
MNIST back-random | 14.58 ± 0.31% | 16.62 ± 0.33% | 20.04 ± 0.35% | 6.73 ± 0.22% (*) | 11.28 ± 0.28% | 9.80 ± 0.26%
MNIST rotated+back-image | 55.18 ± 0.44% | 56.41 ± 0.43% | 62.16 ± 0.43% | 47.39 ± 0.44% (*) | 51.93 ± 0.44% | 52.21 ± 0.44%
Old version: MNIST rotated+back-image | 32.62 ± 0.41% | 37.59 ± 0.42% | 42.17 ± 0.43% | 28.51 ± 0.40% | 24.09 ± 0.37% (*) | 31.84 ± 0.41%
Rectangles | 2.15 ± 0.13% (*) | 2.15 ± 0.13% (*) | 7.16 ± 0.23% | 2.60 ± 0.14% | 2.41 ± 0.13% (*) | 4.71 ± 0.19%
Rectangles-images | 24.04 ± 0.37% | 24.05 ± 0.37% | 33.20 ± 0.41% | 22.50 ± 0.37% (*) | 24.05 ± 0.37% | 23.69 ± 0.37%
Convex | 19.13 ± 0.34% | 19.82 ± 0.35% | 32.25 ± 0.41% | 18.63 ± 0.34% (*) | 18.41 ± 0.34% (*) | 19.92 ± 0.35%


Algorithm / dataset | MNIST basic | MNIST rotated | Old version: MNIST rotated | MNIST back-image | MNIST back-random | MNIST rotated+back-image | Old version: MNIST rotated+back-image | Rectangles | Rectangles-images | Convex
SVM RBF | 3.03 ± 0.15% (*) | 11.11 ± 0.28% | 10.38 ± 0.27% (*) | 22.61 ± 0.37% | 14.58 ± 0.31% | 55.18 ± 0.44% | 32.62 ± 0.41% | 2.15 ± 0.13% (*) | 24.04 ± 0.37% | 19.13 ± 0.34%
SVM Poly | 3.69 ± 0.17% | 15.42 ± 0.32% | 13.61 ± 0.30% | 24.01 ± 0.37% | 16.62 ± 0.33% | 56.41 ± 0.43% | 37.59 ± 0.42% | 2.15 ± 0.13% (*) | 24.05 ± 0.37% | 19.82 ± 0.35%
NNet | 4.69 ± 0.19% | 18.11 ± 0.34% | 17.62 ± 0.33% | 27.41 ± 0.39% | 20.04 ± 0.35% | 62.16 ± 0.43% | 42.17 ± 0.43% | 7.16 ± 0.23% | 33.20 ± 0.41% | 32.25 ± 0.41%
DBN-3 | 3.11 ± 0.15% (*) | 10.30 ± 0.27% (*) | 12.30 ± 0.29% | 16.31 ± 0.32% (*) | 6.73 ± 0.22% (*) | 47.39 ± 0.44% (*) | 28.51 ± 0.40% | 2.60 ± 0.14% | 22.50 ± 0.37% (*) | 18.63 ± 0.34% (*)
SAA-3 | 3.46 ± 0.16% | 10.30 ± 0.27% (*) | 11.43 ± 0.28% | 23.00 ± 0.37% | 11.28 ± 0.28% | 51.93 ± 0.44% | 24.09 ± 0.38% (*) | 2.41 ± 0.13% (*) | 24.05 ± 0.37% | 18.41 ± 0.34%
DBN-1 | 3.94 ± 0.17% | 14.69 ± 0.31% | 12.11 ± 0.29% | 16.15 ± 0.32% (*) | 9.80 ± 0.26% | 52.21 ± 0.44% | 31.84 ± 0.41% | 4.71 ± 0.19% | 23.69 ± 0.37% | 19.92 ± 0.35%

correlation_analysis_wbars.png

Classification error of SVM RBF, SAA-3 and DBN-3 on MNIST examples with progressively less pixel correlation in the background.

Discussion of results

There are several conclusions which can be drawn from these results:

  • Taken together, the deep architecture models show the best overall performance: seven times out of 8, either DBN-3 or SAA-3 is among the best performing models (within the confidence intervals).
  • Four times out of 8, the best accuracy is obtained with a deep architecture model (either DBN-3 or SAA-3). The advantage is especially clear in three cases: mnist-back-rand, mnist-back-image and mnist-rot-back-image, where they perform better by a large margin.
  • The improvement provided by deep architecture models is most notable for factors of variation related to the background, especially random background, where DBN-3 almost reaches its performance on mnist-basic. It seems, however, that not all invariances can be learned as easily; rotation is one example, where the deep architectures do not outperform SVMs.
  • Even though SAA-3 and DBN-3 provide consistent improvements over NNet, these models are still sensitive to hyper-parameter selection. This might explain the surprising similarity of the SAA-3 results on mnist-back-image and mnist-rot-back-image, even though the former corresponds to an easier learning problem than the latter.
  • As the amount of background pixel correlation increases, the classification performance of all three algorithms degrades. This indicates that, as the factors of variation become more complex in their interaction with the input space, the relative advantage brought by DBN-3 and SAA-3 diminishes. This observation is concerning and implies that learning algorithms such as DBN-3 and SAA-3 will eventually need to be adapted in order to scale to harder, potentially "real life", problems.

Conclusions

We presented a series of experiments which show that deep architecture models tend to outperform other shallow models such as SVMs and single-layer feedforward neural networks. We also analyzed the relationships between the performance of these learning algorithms and certain properties of the problems that we considered. In particular, we provided empirical evidence that these techniques compare favorably to other state-of-the-art learning algorithms on learning problems with many factors of variation, but only up to a certain point where the data distribution becomes too complex and computational constraints become an important issue.

Further reading

  • This web page originated from the work presented in the following paper:

An Empirical Evaluation of Deep Architectures on Problems with Many Factors of Variation Larochelle, H., Erhan, D., Courville, A., Bergstra, J., Bengio, Y. ICML 2007 pdf

  • This technical report and this book chapter present the philosophy behind deep architecture models and motivate them in the context of Artificial Intelligence; the technical report also explains Restricted Boltzmann Machines, Contrastive Divergence and Deep Belief Nets in a tutorial manner:

Learning Deep Architectures for AI Bengio, Y., Technical report link

Scaling Learning Algorithms towards AI Bengio, Y. and LeCun, Y. Book chapter in "Large-Scale Kernel Machines" pdf

  • This paper introduced Deep Belief Networks as generative models:

A fast learning algorithm for deep belief nets Hinton, G. E., Osindero, S. and Teh, Y. Neural Computation (2006) pdf ps.gz html

  • This paper introduced Deep Belief Networks as a simple way of initializing a deep feed-forward neural network:

To recognize shapes, first learn to generate images Hinton, G. E. Technical Report (2006) pdf

  • This paper presents a more general study of the framework of initializing a deep feed-forward neural network using a greedy layer-wise procedure:

Greedy Layer-Wise Training of Deep Networks Bengio, Y., Lamblin, P., Popovici, D., Larochelle, H. NIPS 2006 pdf tech-report-pdf

  • This paper presents an application of greedy layer-wise learning of a deep autoassociator for dimensionality reduction:

Reducing the dimensionality of data with neural networks Hinton, G. E. and Salakhutdinov, R. R Science 2006 pdf support-pdf code

  • This paper presents a way to use the greedy layer-wise learning procedure to learn a useful embedding for k-nearest-neighbor classification:

Learning a Nonlinear Embedding by Preserving Class Neighbourhood Structure Salakhutdinov, R. R. and Hinton, G. E. AISTATS 2007 pdf

  • This paper presents different theoretical results about Restricted Boltzmann Machines (RBMs) and Deep Belief Networks, like the universal approximation property of RBMs:

Representational Power of Restricted Boltzmann Machines and Deep Belief Networks Le Roux, N. and Bengio, Y. Technical Report pdf

  • This paper presents how to generalize Restricted Boltzmann Machines to types of data other than binary using exponential family distributions:

Exponential Family Harmoniums with an Application to Information Retrieval Welling, M., Rosen-Zvi, M. and Hinton, G. E. NIPS 2004 pdf ps
