AdaptiveSizeConvolutionProject
-- XavierMuller - 28 Apr 2010

Info

The source code is in Mercurial at ssh://hg@gershwin/Adapt_Size_Convol. The state-of-the-art results are in the CIFAR10 Google document.
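
To fetch the code, a standard Mercurial clone against the URL above should work (assuming SSH access to gershwin):

  hg clone ssh://hg@gershwin/Adapt_Size_Convol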

Tasks

  • Document state of the art on CIFAR10
  • Run present code with tanh, sigmoid
  • Run present code with full convolution (zero-padded edges); see the sketch after this list
  • Run present code with all filters set to maximum size
  • Run code with various group sparsity penalties
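
The sketch below is only illustrative (the project's actual layer code lives in the repository above): it shows, with SciPy, what 'full'-mode convolution (zero-padded edges) followed by a tanh non-linearity does to feature-map sizes. The 28x28 input and (14,14) kernel match Experiment 1 below; everything else is assumed.

  import numpy as np
  from scipy.signal import convolve2d

  x = np.random.randn(28, 28)                 # one MNIST-sized input plane
  k = np.random.randn(14, 14)                 # a (14, 14) kernel, as in Experiment 1
  y = np.tanh(convolve2d(x, k, mode='full'))  # 'full' zero-pads the edges
  print(y.shape)                              # (41, 41): each side grows to 28 + 14 - 1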

Ideas and discussion

05/03/2010

  • Run tests with L1 norm for baseline
  • Run tests with new "clumping penalty" given by $\displaystyle \lambda\sum_{i=1}^{n}{\frac{|w_i|}{\displaystyle 1+\beta\sqrt{\sum_{j\in \mathcal{V}(i)}{w_j^2}}}}$
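
Below is a minimal NumPy sketch of this clumping penalty for a single 2D filter. The neighbourhood V(i) is not defined on this page; the sketch assumes it is the 8-connected spatial neighbourhood of weight i, excluding i itself.

  import numpy as np

  def clumping_penalty(W, lam, beta):
      # Each |w_i| is discounted by the L2 norm of its neighbourhood V(i),
      # so weights that clump together cost less than isolated ones.
      H, Wd = W.shape
      total = 0.0
      for i in range(H):
          for j in range(Wd):
              i0, i1 = max(0, i - 1), min(H, i + 2)
              j0, j1 = max(0, j - 1), min(Wd, j + 2)
              # squared weights over the 8-connected window, centre excluded
              # (the exclusion is an assumption, not stated on this page)
              s = (W[i0:i1, j0:j1] ** 2).sum() - W[i, j] ** 2
              total += abs(W[i, j]) / (1.0 + beta * np.sqrt(s))
      return lam * total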

References

Results

MNIST

  • Experiment 1 : Single-size filters, 'full'-mode convolution, tanh non-linearity
    • finetune rate=0.1,
    • kernel size = (14,14) ,
    • number of kernels = 40
    • mlp size=600,
    • batch size = 20,
    • max_pool_layers = [4,4]
    • Result : epoch 14 / Train : 0.008003 / Valid : 1.072144 / Test : 0.831663 / Cost : 0.000018 %
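    • Shape check (assuming standard 28x28 MNIST inputs and non-overlapping pooling that drops the border remainder): 'full' convolution with a (14,14) kernel gives 28 + 14 - 1 = 41 per side; [4,4] max pooling then gives floor(41/4) = 10, so the MLP sees 40 x 10 x 10 = 4000 features.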
  • Experiment 2 : Reproduce the best results of the initial implementation, but with a tanh instead of a sin non-linearity

  • Experiment 3 : Single-size filters, 'full'-mode convolution, tanh non-linearity, with group sparsity (smaller groups nested inside larger ones)
    • finetune rate=0.1,
    • kernel size = (17,17) ,
    • number of kernels = 30
    • mlp size=200,
    • batch size = 20,
    • max_pool_layers = [4,4]
    • num of groups = 8
    • L2 penalty = [0.5,0.2,0.1,0.01,0,0,0,0]
    • sub-groups : [[7:9, 7:9], [6:10, 6:10], [5:11, 5:11], [4:12, 4:12], [3:13, 3:13], [2:14, 2:14], [1:15, 1:15], [0:16, 0:16]] (see the sketch after this experiment)
    • L1 penalty = 0.2
    • Result : epoch 14 / Train : 0.008003 / Valid : 1.042084 / Test : 0.731463 / Cost : 0.023707 % (beats my model with multiple filter sizes)
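
To make the group structure concrete, here is a rough NumPy sketch of one way the nested-group penalty above could be computed on a single (17,17) filter. The slice pairs and coefficients are those listed above; the combination rule (per-group L2 norms scaled by their coefficients, plus a global L1 term) is an assumption about the actual code, not taken from it.

  import numpy as np

  def nested_group_penalty(W, l2_coeffs, l1_coeff):
      # Concentric square groups on a (17, 17) filter, smallest first,
      # matching the sub-group slices listed for Experiment 3.
      windows = [slice(7, 9), slice(6, 10), slice(5, 11), slice(4, 12),
                 slice(3, 13), slice(2, 14), slice(1, 15), slice(0, 16)]
      group_term = sum(c * np.sqrt((W[s, s] ** 2).sum())
                       for s, c in zip(windows, l2_coeffs))
      return group_term + l1_coeff * np.abs(W).sum()

  W = np.random.randn(17, 17)
  p = nested_group_penalty(W, [0.5, 0.2, 0.1, 0.01, 0, 0, 0, 0], 0.2)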

  • Experiment 4 : same as Experiment 3, but without sparsity
    • Result : epoch 9 / Train : 0.016006 / Valid : 1.082164 / Test : 0.821643 / Cost : 0.029048 %

  • Experiment 5 : Single-size filters, 'full'-mode convolution, tanh non-linearity, with group sparsity (smaller groups nested inside larger ones)
    • finetune rate=0.1,
    • kernel size = (17,17) ,
    • number of kernels = 30
    • mlp size=400,
    • batch size = 20,
    • max_pool_layers = [4,4]
    • num of groups = 8
    • L2 penalty = [0.5,0.2,0.1,0.01,0,0,0,0]
    • sub-groups : [[7:9, 7:9], [6:10, 6:10], [5:11, 5:11], [4:12, 4:12], [3:13, 3:13], [2:14, 2:14], [1:15, 1:15], [0:16, 0:16]]
    • L1 penalty = 0.2
    • Result : epoch 12 / Train : 0.012005 / Valid : 1.062124 / Test : 0.831663 / Cost : 0.027216 %

  • Experiment 6 : same as Experiment 5, but without sparsity
    • Result : epoch 27 / Train : 0.000000 / Valid : 0.971944 / Test : 0.801603 / Cost : 0.010622 %

  • Experiment 7 : Single-size filters, 'full'-mode convolution, tanh non-linearity, with group sparsity (smaller groups nested inside larger ones)
    • finetune rate=0.1,
    • kernel size = (17,17) ,
    • number of kernels = 30
    • mlp size=100,
    • batch size = 20,
    • max_pool_layers = [4,4]
    • num of groups = 8
    • L2 penalty = [0.5,0.2,0.1,0.01,0,0,0,0]
    • sub-groups : [[7:9, 7:9], [6:10, 6:10], [5:11, 5:11], [4:12, 4:12], [3:13, 3:13], [2:14, 2:14], [1:15, 1:15], [0:16, 0:16]]
    • L1 penalty = 0.2
    • Result : epoch 18 / Train : 0.000000 / Valid : 0.961924 / Test : 0.851703 / Cost : 0.020192 %

  • Experiment 8 : same as Experiment 7, but without sparsity
    • Result : coming

CIFAR10