
Neuron Filters Gallery

Here are some images of the filters (weights) learned by the neurons of the first hidden layer of different deep networks. These features were learned from 10000 MNIST digit images, randomly drawn from the original MNIST data set with replacement (don't ask why).

These images were obtained using Matlab's image function. Given a matrix M in which each row contains the input weights of one neuron, I used the following command to produce the images below:

i = begin; for j = 1:128; subplot(8,16,j); image(reshape((M(i+j,:)+a)/(2*a)*64, d1, d2)'); colormap('gray'); axis off; box on; end;

which takes the neurons at positions begin + 1 up to begin + 128 and plots their input weights. Black corresponds to a weight smaller than or equal to -a, white to a weight greater than or equal to a, and mid-gray to a weight of 0. The weights are drawn as d1 by d2 pixel images (for the MNIST images, d1 and d2 are both 28).
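For convenience, the same command can be wrapped in a small function. This is only a sketch (the function name plot_filters and its argument order are my own, not from the original experiments); it assumes, as above, that each row of M holds the input weights of one neuron.

function plot_filters(M, begin, a, d1, d2)
% Plot the input weights of neurons begin+1 to begin+128 as d1-by-d2
% grayscale images, with weights clipped to the range [-a, a].
for j = 1:128
    subplot(8, 16, j);
    % Map [-a, a] linearly onto the 64 entries of the gray colormap;
    % image() clamps out-of-range values to black or white.
    image(reshape((M(begin + j, :) + a) / (2*a) * 64, d1, d2)');
    colormap('gray');
    axis off; box on;
end

For example, plot_filters(M, 0, 3, 28, 28) plots the first 128 filters with the scale a = 3 used for most of the figures below.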

Some test errors are also reported in the following sections, but they should be taken with a grain of salt. Indeed, very few hyper-parameter values were tried for the different architectures. Also, the test error was evaluated on only 2000 samples drawn from the MNIST data and, again, the sampling was done with replacement. This makes the estimated error optimistic, since some samples in the test set are also present in the training set (again, don't ask why this was done...).
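To get a feel for how much overlap this sampling scheme produces, here is a small illustrative sketch (mine, not the code used for these experiments) which assumes that both sets are drawn with replacement from the same pool of 60000 MNIST training images; it typically reports a few hundred collisions:

% Hypothetical illustration of train/test overlap under sampling with replacement.
N = 60000;                        % size of the pool the samples are drawn from (assumed)
train_idx = randi(N, 10000, 1);   % training indices, drawn with replacement
test_idx  = randi(N, 2000, 1);    % test indices, drawn with replacement
overlap = sum(ismember(test_idx, train_idx));
fprintf('%d of the 2000 test samples also occur in the training set\n', overlap);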

DBN with binomial input units

Here, a is equal to 3, as usually used by Hinton. The DBN had 500 units in the first and second hidden layers, and 2000 units in the third layer. It obtains 1.8% classification error on 2000 samples drawn from the MNIST data (again, with replacement, so this error is optimistic):

dbn_hid1_subset_corrected.png

For an EPS version of this figure: dbn_hid1_subset_corrected.eps

DBN with truncated exponential units

Here, a is equal to 21. This value was chosen so that $sigm(3) \approx trunc\_exp(21)$ (a quick numerical check is given after the figure). The DBN had 500 units in the first and second hidden layers, and 2000 units in the third layer. It obtains 2.25% classification error on the 2000-sample test set used in the previous section:

dbn_hid1_trunc_exp_subset_corrected.png

For an EPS version of this figure: dbn_hid1_trunc_exp_subset_corrected.eps
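My reading of the notation above (an assumption on my part, following the truncated exponential units of Bengio et al.'s Greedy Layer-Wise Training of Deep Networks, not something stated on this page) is that $trunc\_exp(x)$ denotes the expected activation of a truncated exponential unit with total input x, in which case the choice of 21 can be checked directly:

$trunc\_exp(x) = \frac{1}{1 - e^{-x}} - \frac{1}{x}, \qquad trunc\_exp(21) \approx 0.9524, \qquad sigm(3) = \frac{1}{1 + e^{-3}} \approx 0.9526$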

Here is the same figure, with a equal to 3:

dbn_hid1_trunc_exp_subset_scale2_corrected.png

For an EPS version of this figure: dbn_hid1_trunc_exp_subset_scale2_corrected.eps

DBN with Gaussian units

Here, a is equal to 23. This value was chosen because it roughly corresponds to the maximum absolute value over all weights (see the one-line snippet after the figure). The DBN had 500 units in the first and second hidden layers, and 2000 units in the third layer. It obtains 2.5% classification error on the 2000-sample test set used in the previous sections:

dbn_hid1_gauss_subset_corrected.png

For an EPS version of this figure: dbn_hid1_gauss_subset_corrected.eps
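As a one-line (hypothetical) snippet, the scale used above could be recovered directly from the weight matrix M of the plotting command:

a = max(abs(M(:)));   % largest absolute first-layer weight, roughly 23 for this DBN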

Here is the same figure, with a equal to 3:

dbn_hid1_gauss_subset_scale2_corrected.png

For an EPS version of this figure: dbn_hid1_gauss_subset_scale2_corrected.eps

Stacked Autoassociator with binomial units

Here, a is equal to 3. The SAA had 700 units in the first hidden layer, 1000 in the second, and 2000 in the third. It obtains 2.2% classification error on the 2000-sample test set used in the previous sections:

saa_hid1_lnd_0_subset_corrected.png

For an EPS version of this figure: saa_hid1_lnd_0_subset_corrected.eps

Stacked Autoassociator with multinomial (softmax) units

Here, a is equal to 3. The SAA had 1500 units in the first and second hidden layers, corresponding to 100 concatenated multinomial units of size 15 (a small sketch of such a group-wise softmax layer is given after the figure). The third layer consists of 2000 binomial units. It obtains 2.8% classification error on the 2000-sample test set used in the previous sections:

saa_hid1_softmax_15_subset_corrected.png

For an EPS version of this figure: saa_hid1_softmax_15_subset_corrected.eps
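To make the structure of this first layer concrete, here is a minimal sketch (my own illustration; the variable names x, W and b are hypothetical, and it assumes that consecutive units form a group) of how 1500 activations can be computed as 100 independent softmax groups of size 15:

% x: 784-by-1 input, W: 1500-by-784 weight matrix, b: 1500-by-1 biases
pre = W * x + b;                              % 1500 pre-activations
pre = reshape(pre, 15, 100);                  % one column per multinomial group
pre = pre - repmat(max(pre, [], 1), 15, 1);   % subtract each group's max for stability
h = exp(pre) ./ repmat(sum(exp(pre), 1), 15, 1);   % softmax within each group
h = h(:);                                     % concatenate back into a 1500-by-1 vector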

Standard Neural Network with 3 hidden layers

Here, a is equal to 3. The neural net had 500 units in the first and second hidden layers and 2000 units in the third layer. The hidden activation functions are sigmoids, just as in a DBN or SAA. A large learning rate (0.1) was used so that the gradient could propagate well to the first hidden layer. It obtains 3.95% classification error on the 2000-sample test set used in the previous sections:

dbn_hid1_no_greedy_3lay_subset_corrected.png

For an EPS version of this figure: dbn_hid1_no_greedy_3lay_subset_corrected.eps

-- HugoLarochelle - 18 June 2007

Topic attachments
Attachment  Size  Date  Who
dbn_hid1_subset.png  86.5 K  03 May 2007 - 09:46  HugoLarochelle
dbn_hid1_trunc_exp_subset.png  83.9 K  03 May 2007 - 09:47  HugoLarochelle
saa_hid1_lnd_0_subset.png  109.4 K  03 May 2007 - 09:47  HugoLarochelle
dbn_hid1_subset.eps  233.7 K  03 May 2007 - 09:48  HugoLarochelle
dbn_hid1_trunc_exp_subset.eps  224.0 K  03 May 2007 - 09:48  HugoLarochelle
saa_hid1_lnd_0_subset.eps  237.7 K  03 May 2007 - 09:48  HugoLarochelle
dbn_hid1_no_greedy_3lay_subset.png  77.1 K  09 May 2007 - 13:37  HugoLarochelle
dbn_hid1_no_greedy_subset.png  66.5 K  09 May 2007 - 13:37  HugoLarochelle
saa_hid1_softmax_15_subset.eps  231.7 K  09 May 2007 - 13:45  HugoLarochelle
dbn_hid1_no_greedy_3lay_subset.eps  228.5 K  09 May 2007 - 13:46  HugoLarochelle
dbn_hid1_no_greedy_subset.eps  226.7 K  09 May 2007 - 13:46  HugoLarochelle
dbn_hid1_trunc_exp_subset_scale2.png  138.6 K  11 May 2007 - 17:33  HugoLarochelle
dbn_hid1_trunc_exp_subset_scale2.eps  220.6 K  11 May 2007 - 17:34  HugoLarochelle
dbn_hid1_gauss_subset.png  67.6 K  11 May 2007 - 17:34  HugoLarochelle
dbn_hid1_gauss_subset.eps  226.2 K  11 May 2007 - 17:34  HugoLarochelle
dbn_hid1_gauss_subset_scale2.png  136.6 K  11 May 2007 - 17:35  HugoLarochelle
dbn_hid1_gauss_subset_scale2.eps  240.4 K  11 May 2007 - 17:35  HugoLarochelle
dbn_hid1_gauss_subset_corrected.eps  215.2 K  18 Jun 2007 - 16:29  HugoLarochelle
dbn_hid1_gauss_subset_corrected.png  56.7 K  18 Jun 2007 - 16:29  HugoLarochelle
dbn_hid1_gauss_subset_scale2_corrected.eps  240.2 K  18 Jun 2007 - 16:29  HugoLarochelle
dbn_hid1_gauss_subset_scale2_corrected.png  119.8 K  18 Jun 2007 - 16:29  HugoLarochelle
dbn_hid1_no_greedy_3lay_subset_corrected.eps  222.1 K  18 Jun 2007 - 16:30  HugoLarochelle
dbn_hid1_no_greedy_3lay_subset_corrected.png  67.5 K  18 Jun 2007 - 16:30  HugoLarochelle
dbn_hid1_subset_corrected.eps  221.7 K  18 Jun 2007 - 16:30  HugoLarochelle
dbn_hid1_subset_corrected.png  69.2 K  18 Jun 2007 - 16:30  HugoLarochelle
dbn_hid1_trunc_exp_subset_corrected.eps  217.2 K  18 Jun 2007 - 16:31  HugoLarochelle
dbn_hid1_trunc_exp_subset_corrected.png  66.0 K  18 Jun 2007 - 16:31  HugoLarochelle
dbn_hid1_trunc_exp_subset_scale2_corrected.eps  220.0 K  18 Jun 2007 - 16:31  HugoLarochelle
dbn_hid1_trunc_exp_subset_scale2_corrected.png  123.9 K  18 Jun 2007 - 16:31  HugoLarochelle
saa_hid1_lnd_0_subset_corrected.eps  229.4 K  18 Jun 2007 - 16:32  HugoLarochelle
saa_hid1_lnd_0_subset_corrected.png  86.1 K  18 Jun 2007 - 16:32  HugoLarochelle
saa_hid1_softmax_15_subset_corrected.eps  215.9 K  18 Jun 2007 - 16:32  HugoLarochelle
saa_hid1_softmax_15_subset.png  97.7 K  18 Jun 2007 - 16:32  HugoLarochelle
saa_hid1_softmax_15_subset_corrected.png  74.1 K  18 Jun 2007 - 16:35  HugoLarochelle