PLearn 0.1
PLearn::NeighborhoodSmoothnessNNet Class Reference

#include <NeighborhoodSmoothnessNNet.h>

Inheritance diagram for PLearn::NeighborhoodSmoothnessNNet (diagram omitted; inherits PLearn::PLearner).
Collaboration diagram for PLearn::NeighborhoodSmoothnessNNet (diagram omitted).


Public Types

typedef PLearner inherited

Public Member Functions

 NeighborhoodSmoothnessNNet ()
virtual ~NeighborhoodSmoothnessNNet ()
virtual string classname () const
virtual OptionList & getOptionList () const
virtual OptionMap & getOptionMap () const
virtual RemoteMethodMap & getRemoteMethodMap () const
virtual NeighborhoodSmoothnessNNet * deepCopy (CopiesMap &copies) const
virtual void build ()
 Finish building the object; just call inherited::build followed by build_()
virtual void forget ()
 *** SUBCLASS WRITING: ***
virtual int outputsize () const
 SUBCLASS WRITING: override this so that it returns the size of this learner's output, as a function of its inputsize(), targetsize() and set options.
virtual TVec< string > getTrainCostNames () const
 *** SUBCLASS WRITING: ***
virtual TVec< string > getTestCostNames () const
 *** SUBCLASS WRITING: ***
virtual void train ()
 *** SUBCLASS WRITING: ***
virtual void setTrainingSet (VMat training_set, bool call_forget=true)
 Declares the training set.
virtual void computeOutput (const Vec &input, Vec &output) const
 *** SUBCLASS WRITING: ***
virtual void computeOutputAndCosts (const Vec &input, const Vec &target, Vec &output, Vec &costs) const
 Default calls computeOutput and computeCostsFromOutputs.
virtual void computeCostsFromOutputs (const Vec &input, const Vec &output, const Vec &target, Vec &costs) const
 *** SUBCLASS WRITING: ***
virtual void makeDeepCopyFromShallowCopy (CopiesMap &copies)
 Transforms a shallow copy into a deep copy.

Static Public Member Functions

static string _classname_ ()
static OptionList & _getOptionList_ ()
static RemoteMethodMap & _getRemoteMethodMap_ ()
static Object * _new_instance_for_typemap_ ()
static bool _isa_ (const Object *o)
static void _static_initialize_ ()
static const PPath & declaringFile ()

Public Attributes

Func f
Func f_input_to_hidden
Func test_costf
Func output_and_target_to_cost
int max_n_instances
int nhidden
int nhidden2
int noutputs
real sigma_hidden
real sne_weight
real weight_decay
real bias_decay
real layer1_weight_decay
real layer1_bias_decay
real layer2_weight_decay
real layer2_bias_decay
real output_layer_weight_decay
real output_layer_bias_decay
real direct_in_to_out_weight_decay
real classification_regularizer
string penalty_type
bool L1_penalty
bool direct_in_to_out
string output_transfer_func
real interval_minval
real interval_maxval
Array< string > cost_funcs
 a list of cost functions to use in the form "[ cf1; cf2; cf3; ... ]"
PP< Optimizer > optimizer
int batch_size

Static Public Attributes

static StaticInitializer _static_initializer_

Protected Member Functions

void initializeParams ()

Static Protected Member Functions

static void declareOptions (OptionList &ol)
 Declares this class' options.

Protected Attributes

Var input
Var target
Var sampleweight
Var w1
Var w2
Var wout
Var wdirect
Var last_hidden
Var output
Var bag_size
Var bag_inputs
Var bag_output
Var bag_hidden
int test_bag_size
Func invars_to_training_cost
VarArray costs
VarArray penalties
Var training_cost
Var test_costs
VarArray invars
VarArray params
Vec paramsvalues
Var p_ij

Private Member Functions

void build_ ()
 **** SUBCLASS WRITING: ****

Detailed Description

Definition at line 53 of file NeighborhoodSmoothnessNNet.h.
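
A minimal usage sketch follows. It is illustrative only: it assumes a standard PLearn build environment, and train_vmat (a VMat with the bag layout and extra p_ij input column this learner expects) and some_optimizer (a PP<Optimizer>) are hypothetical placeholders.

    #include <NeighborhoodSmoothnessNNet.h>

    using namespace PLearn;

    PP<NeighborhoodSmoothnessNNet> net = new NeighborhoodSmoothnessNNet();
    net->nhidden         = 10;      // at least one hidden layer is required by build_()
    net->noutputs        = 2;       // determines outputsize()
    net->sigma_hidden    = 1.0;     // bandwidth of the hidden-layer Gaussian kernel
    net->sne_weight      = 0.1;     // weight of the SNE term in the training cost
    net->max_n_instances = 20;      // maximum number of instances per bag
    net->cost_funcs.append("NLL");  // the first cost is the objective optimized
    net->optimizer       = some_optimizer;  // hypothetical optimizer object
    net->build();

    net->setTrainingSet(train_vmat);  // calls build() and forget() as needed
    net->train();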


Member Typedef Documentation

typedef PLearner PLearn::NeighborhoodSmoothnessNNet::inherited

Reimplemented from PLearn::PLearner.

Definition at line 93 of file NeighborhoodSmoothnessNNet.h.


Constructor & Destructor Documentation

PLearn::NeighborhoodSmoothnessNNet::NeighborhoodSmoothnessNNet ( )
PLearn::NeighborhoodSmoothnessNNet::~NeighborhoodSmoothnessNNet ( ) [virtual]

Definition at line 122 of file NeighborhoodSmoothnessNNet.cc.

{
}

Member Function Documentation

string PLearn::NeighborhoodSmoothnessNNet::_classname_ ( ) [static]

Reimplemented from PLearn::PLearner.

Definition at line 94 of file NeighborhoodSmoothnessNNet.cc.

OptionList & PLearn::NeighborhoodSmoothnessNNet::_getOptionList_ ( ) [static]

Reimplemented from PLearn::PLearner.

Definition at line 94 of file NeighborhoodSmoothnessNNet.cc.

RemoteMethodMap & PLearn::NeighborhoodSmoothnessNNet::_getRemoteMethodMap_ ( ) [static]

Reimplemented from PLearn::PLearner.

Definition at line 94 of file NeighborhoodSmoothnessNNet.cc.

bool PLearn::NeighborhoodSmoothnessNNet::_isa_ ( const Object * o ) [static]

Reimplemented from PLearn::PLearner.

Definition at line 94 of file NeighborhoodSmoothnessNNet.cc.

Object * PLearn::NeighborhoodSmoothnessNNet::_new_instance_for_typemap_ ( ) [static]

Reimplemented from PLearn::Object.

Definition at line 94 of file NeighborhoodSmoothnessNNet.cc.

void PLearn::NeighborhoodSmoothnessNNet::_static_initialize_ ( ) [static]

Reimplemented from PLearn::PLearner.

Definition at line 94 of file NeighborhoodSmoothnessNNet.cc.

void PLearn::NeighborhoodSmoothnessNNet::build ( ) [virtual]

Finish building the object; just call inherited::build followed by build_()

Reimplemented from PLearn::PLearner.

Definition at line 236 of file NeighborhoodSmoothnessNNet.cc.

References PLearn::PLearner::build(), and build_().

Referenced by setTrainingSet(), and train().


void PLearn::NeighborhoodSmoothnessNNet::build_ ( ) [private]

**** SUBCLASS WRITING: ****

This method should finish building the object, according to the set 'options', in *any* situation.

Typical situations include:

  • Initial building of an object from a few user-specified options
  • Building of a "reloaded" object: i.e. from the complete set of all serialised options.
  • Updating or "re-building" of an object after a few "tuning" options (such as hyper-parameters) have been modified.

You can assume that the parent class' build_() has already been called.

A typical build method will want to know the inputsize(), targetsize() and outputsize(), and may also want to check whether train_set->hasWeights(). All these methods require a train_set to be set, so the first thing you may want to do is check if(train_set) before doing any heavy building; a short sketch is given below.

Note: build() is always called by setTrainingSet.
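
A sketch of that guard pattern, for a hypothetical subclass (names are illustrative):

    void MyLearner::build_()
    {
        if (!train_set)   // sizes are unknown until a training set is declared
            return;
        int n_in  = inputsize();
        int n_out = outputsize();
        // ... heavy graph construction using n_in / n_out ...
    }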

Reimplemented from PLearn::PLearner.

Definition at line 245 of file NeighborhoodSmoothnessNNet.cc.

References PLearn::affine_transform(), PLearn::affine_transform_weight_penalty(), PLearn::TVec< T >::append(), bag_hidden, bag_inputs, bag_size, bias_decay, PLearn::binary_classification_loss(), c, PLearn::classification_loss(), classification_regularizer, cost_funcs, costs, PLearn::cross_entropy(), direct_in_to_out, direct_in_to_out_weight_decay, PLearn::dot(), PLearn::exp(), f, f_input_to_hidden, PLearn::hconcat(), initializeParams(), input, PLearn::PLearner::inputsize(), PLearn::PLearner::inputsize_, interval_maxval, interval_minval, invars, invars_to_training_cost, PLearn::invertElements(), L1_penalty, last_hidden, layer1_bias_decay, layer1_weight_decay, layer2_bias_decay, layer2_weight_decay, PLearn::Var::length(), PLearn::lift_output(), PLearn::log(), PLearn::log_softmax(), PLearn::lowerstring(), PLearn::VarArray::makeSharedValue(), max_n_instances, PLearn::minus(), PLearn::multiclass_loss(), PLearn::neg_log_pi(), PLearn::VarArray::nelems(), PLearn::newObject(), nhidden, nhidden2, PLearn::onehot_squared_loss(), output, output_and_target_to_cost, output_layer_bias_decay, output_layer_weight_decay, output_transfer_func, outputsize(), p_ij, params, paramsvalues, penalties, penalty_type, PLDEPRECATED, PLERROR, PLWARNING, PLearn::TVec< T >::push_back(), PLearn::TVec< T >::resize(), sampleweight, sigma_hidden, PLearn::sigmoid(), PLearn::TVec< T >::size(), sne_weight, PLearn::softmax(), PLearn::softplus(), PLearn::square(), PLearn::stable_cross_entropy(), PLearn::subMat(), PLearn::sum(), PLearn::sumabs(), PLearn::sumsquare(), PLearn::tanh(), target, PLearn::PLearner::targetsize(), PLearn::PLearner::targetsize_, test_costf, test_costs, PLearn::times(), PLearn::timesScalar(), training_cost, PLearn::transposeProduct(), PLearn::unfoldedFunc(), PLearn::var(), w1, w2, wdirect, weight_decay, PLearn::PLearner::weightsize_, PLearn::Var::width(), and wout.

Referenced by build().

{
    /*
     * Create Topology Var Graph
     */

    // Don't do anything if we don't have a train_set
    // It's the only thing that knows the inputsize and targetsize anyway...

    if(inputsize_>=0 && targetsize_>=0 && weightsize_>=0)
    {

        // init. basic vars
        int true_inputsize = inputsize(); // inputsize is now true inputsize 
        bag_inputs = Var(max_n_instances, inputsize() + 1);
        // The input (with pij) is the first column of the bag inputs.
        Var input_and_pij = subMat(bag_inputs, 0, 0, 1, bag_inputs->width());
        input = new SubMatTransposeVariable(input_and_pij, 0, 0, 1, true_inputsize);
        output = input;
        params.resize(0);

        // first hidden layer
        if(nhidden>0)
        {
            w1 = Var(1 + true_inputsize, nhidden, "w1");      
            output = tanh(affine_transform(output,w1));
            params.append(w1);
            last_hidden = output;
        }

        // second hidden layer
        if(nhidden2>0)
        {
            w2 = Var(1+nhidden, nhidden2, "w2");
            output = tanh(affine_transform(output,w2));
            params.append(w2);
            last_hidden = output;
        }

        if (nhidden==0)
            PLERROR("NeighborhoodSmoothnessNNet::build_ - there must be hidden units (nhidden > 0)!");
      

        // output layer before transfer function

        wout = Var(1+output->size(), outputsize(), "wout");
        output = affine_transform(output,wout);
        params.append(wout);

        // direct in-to-out layer
        if(direct_in_to_out)
        {
            wdirect = Var(true_inputsize, outputsize(), "wdirect");
            output += transposeProduct(wdirect, input);
            params.append(wdirect);
        }

        Var before_transfer_func = output;
   
        /*
         * output_transfer_func
         */
        string::size_type p = 0;
        if(output_transfer_func!="" && output_transfer_func!="none")
        {
            if(output_transfer_func=="tanh")
                output = tanh(output);
            else if(output_transfer_func=="sigmoid")
                output = sigmoid(output);
            else if(output_transfer_func=="softplus")
                output = softplus(output);
            else if(output_transfer_func=="exp")
                output = exp(output);
            else if(output_transfer_func=="softmax")
                output = softmax(output);
            else if (output_transfer_func == "log_softmax")
                output = log_softmax(output);
            else if ((p=output_transfer_func.find("interval("))!=string::npos)
            {
                string::size_type q = output_transfer_func.find(",");
                interval_minval = atof(output_transfer_func.substr(p+9,q-(p+9)).c_str());
                string::size_type r = output_transfer_func.find(")");
                interval_maxval = atof(output_transfer_func.substr(q+1,r-(q+1)).c_str());
                output = interval_minval + (interval_maxval - interval_minval)*sigmoid(output);
            }
            else
                PLERROR("In NNet::build_()  unknown output_transfer_func option: %s",output_transfer_func.c_str());
        }

        /*
         * target and weights
         */
      
        target = Var(targetsize()-1, "target");
      
        if(weightsize_>0)
        {
            if (weightsize_!=1)
                PLERROR("NeighborhoodSmoothnessNNet: expected weightsize to be 1 or 0 (or unspecified = -1, meaning 0), got %d",weightsize_);
            sampleweight = Var(1, "weight");
        }

        // checking penalty
        if( L1_penalty )
        {
            PLDEPRECATED("Option \"L1_penalty\" deprecated. Please use \"penalty_type = L1\" instead.");
            L1_penalty = 0;
            penalty_type = "L1";
        }

        string pt = lowerstring( penalty_type );
        if( pt == "l1" )
            penalty_type = "L1";
        else if( pt == "l1_square" || pt == "l1 square" || pt == "l1square" )
            penalty_type = "L1_square";
        else if( pt == "l2_square" || pt == "l2 square" || pt == "l2square" )
            penalty_type = "L2_square";
        else if( pt == "l2" )
        {
            PLWARNING("L2 penalty not supported, assuming you want L2 square");
            penalty_type = "L2_square";
        }
        else
            PLERROR("penalty_type \"%s\" not supported", penalty_type.c_str());

        // create penalties
        penalties.resize(0);  // prevents penalties from being added twice by consecutive builds
        if(w1 && ((layer1_weight_decay + weight_decay)!=0 || (layer1_bias_decay + bias_decay)!=0))
            penalties.append(affine_transform_weight_penalty(w1, (layer1_weight_decay + weight_decay), (layer1_bias_decay + bias_decay), penalty_type));
        if(w2 && ((layer2_weight_decay + weight_decay)!=0 || (layer2_bias_decay + bias_decay)!=0))
            penalties.append(affine_transform_weight_penalty(w2, (layer2_weight_decay + weight_decay), (layer2_bias_decay + bias_decay), penalty_type));
        if(wout && ((output_layer_weight_decay + weight_decay)!=0 || (output_layer_bias_decay + bias_decay)!=0))
            penalties.append(affine_transform_weight_penalty(wout, (output_layer_weight_decay + weight_decay), 
                                                             (output_layer_bias_decay + bias_decay), penalty_type));
        if(wdirect && (direct_in_to_out_weight_decay + weight_decay) != 0)
        {
            if (penalty_type == "L1_square")
                penalties.append(square(sumabs(wdirect))*(direct_in_to_out_weight_decay + weight_decay));
            else if (penalty_type == "L1")
                penalties.append(sumabs(wdirect)*(direct_in_to_out_weight_decay + weight_decay));
            else if (penalty_type == "L2_square")
                penalties.append(sumsquare(wdirect)*(direct_in_to_out_weight_decay + weight_decay));
        }

        // Shared values hack...
        if(paramsvalues && (paramsvalues.size() == params.nelems()))
            params << paramsvalues;
        else
        {
            paramsvalues.resize(params.nelems());
            initializeParams();
        }
        params.makeSharedValue(paramsvalues);

        output->setName("element output");

        f = Func(input, output);
        f_input_to_hidden = Func(input, last_hidden);

        /*
         * costfuncs
         */

        bag_size = Var(1,1);
        bag_hidden = unfoldedFunc(subMat(bag_inputs, 0, 0, bag_inputs.length(), true_inputsize), f_input_to_hidden, false);
        p_ij = subMat(bag_inputs, 1, true_inputsize, bag_inputs->length() - 1, 1);

        // The q_ij function.
        Var hidden_0 = new SubMatTransposeVariable(bag_hidden, 0, 0, 1, bag_hidden->width());
        Var store_hidden(last_hidden.length(), last_hidden.width());
        Var hidden_0_minus_hidden = minus(hidden_0, store_hidden);
        Var k_hidden =
            exp(
                timesScalar(
                    dot(hidden_0_minus_hidden, hidden_0_minus_hidden),
                    var(- 1 / (sigma_hidden * sigma_hidden))
                    )
                );
        Func f_hidden_to_k_hidden(store_hidden, k_hidden);
        Var k_hidden_all =
            unfoldedFunc(
                subMat(
                    bag_hidden, 1, 0, bag_hidden->length() - 1, bag_hidden->width()
                    ),
                f_hidden_to_k_hidden,
                false
                );
        Var one_over_sum_of_k_hidden = invertElements(sum(k_hidden_all));
        Var log_q_ij = log(timesScalar(k_hidden_all, one_over_sum_of_k_hidden));
        Var minus_weight_sum_p_ij_log_q_ij =
            timesScalar(sum(times(p_ij, log_q_ij)), var(-sne_weight));

        int ncosts = cost_funcs.size();  
        if(ncosts<=0)
            PLERROR("In NNet::build_()  Empty cost_funcs : must at least specify the cost function to optimize!");
        costs.resize(ncosts);
      
        for(int k=0; k<ncosts; k++)
        {
            // create costfuncs and apply individual weights if weightpart > 1
            if(cost_funcs[k]=="mse")
                costs[k]= sumsquare(output-target);
            else if(cost_funcs[k]=="mse_onehot")
                costs[k] = onehot_squared_loss(output, target);
            else if(cost_funcs[k]=="NLL") 
            {
                if (output->size() == 1) {
                    // Assume sigmoid output here!
                    costs[k] = cross_entropy(output, target);
                } else {
                    if (output_transfer_func == "log_softmax")
                        costs[k] = -output[target];
                    else
                        costs[k] = neg_log_pi(output, target);
                }
            } 
            else if(cost_funcs[k]=="class_error")
                costs[k] = classification_loss(output, target);
            else if(cost_funcs[k]=="binary_class_error")
                costs[k] = binary_classification_loss(output, target);
            else if(cost_funcs[k]=="multiclass_error")
                costs[k] = multiclass_loss(output, target);
            else if(cost_funcs[k]=="cross_entropy")
                costs[k] = cross_entropy(output, target);
            else if (cost_funcs[k]=="stable_cross_entropy") {
                Var c = stable_cross_entropy(before_transfer_func, target);
                costs[k] = c;
                if (classification_regularizer) {
                    // There is a regularizer to add to the cost function.
                    dynamic_cast<NegCrossEntropySigmoidVariable*>((Variable*) c)->
                        setRegularizer(classification_regularizer);
                }
            }
            else if (cost_funcs[k]=="lift_output")
                costs[k] = lift_output(output, target);
            else  // Assume we got a Variable name and its options
            {
                costs[k]= dynamic_cast<Variable*>(newObject(cost_funcs[k]));
                if(costs[k].isNull())
                    PLERROR("In NNet::build_()  unknown cost_func option: %s",cost_funcs[k].c_str());
                costs[k]->setParents(output & target);
                costs[k]->build();
            }
          
            // take into account the sampleweight
            //if(sampleweight)
            //  costs[k]= costs[k] * sampleweight; // NO, because this is taken into account (more properly) in stats->update
        }

        test_costs = hconcat(costs);

        // Apply penalty to cost.
        // If there is no penalty, we still add costs[0] as the first cost, in
        // order to keep the same number of costs as if there was a penalty.
        Var test_costs_final = test_costs;
        Var first_cost_final = costs[0];
        if (penalties.size() != 0) {
            first_cost_final = sum(hconcat(first_cost_final & penalties));
        }
        if (weightsize_ > 0) {
            test_costs_final = sampleweight * test_costs;
            first_cost_final = sampleweight * first_cost_final;
        }
        // We add the SNE cost.
        // TODO Make sure we optimize the training cost.
        // TODO Actually maybe we should put this before multiplying by sampleweight.
        first_cost_final = first_cost_final + minus_weight_sum_p_ij_log_q_ij;
      
        training_cost = hconcat(first_cost_final & test_costs_final);

/*      if(penalties.size() != 0) {
        if (weightsize_>0)
        // only multiply by sampleweight if there are weights
        training_cost = hconcat(sampleweight*sum(hconcat(costs[0] & penalties))
        & (test_costs*sampleweight));
        else {
        training_cost = hconcat(sum(hconcat(costs[0] & penalties)) & test_costs);
        }
        } 
        else {
        if(weightsize_>0) {
        // only multiply by sampleweight if there are weights
        training_cost = hconcat(costs[0]*sampleweight & test_costs*sampleweight);
        } else {
        training_cost = hconcat(costs[0] & test_costs);
        }
        } */

        training_cost->setName("training_cost");
        test_costs->setName("test_costs");

        if (weightsize_ > 0) {
            invars = bag_inputs & bag_size & target & sampleweight;
        } else {
            invars = bag_inputs & bag_size & target;
        }
        invars_to_training_cost = Func(invars, training_cost);

        invars_to_training_cost->recomputeParents();

        // Other funcs.
        VarArray outvars;
        VarArray testinvars;
        testinvars.push_back(input);
        outvars.push_back(output);
        testinvars.push_back(target);
        outvars.push_back(target);

        test_costf = Func(testinvars, output&test_costs);
        test_costf->recomputeParents();
        output_and_target_to_cost = Func(outvars, test_costs);
        output_and_target_to_cost->recomputeParents();

    }
}
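
For reference, the SNE term assembled above (the k_hidden, log_q_ij and minus_weight_sum_p_ij_log_q_ij Vars) corresponds to the following, where h_0 is the hidden representation of the bag's first input and h_j (j >= 1) those of the other bag members:

    k_j = \exp\left( -\frac{\lVert h_0 - h_j \rVert^2}{\sigma_{\text{hidden}}^2} \right),
    \qquad
    q_j = \frac{k_j}{\sum_{j'} k_{j'}},
    \qquad
    \text{cost}_{\text{SNE}} = -\,\text{sne\_weight} \sum_j p_{ij} \log q_j

This term is added to the first (objective) cost in training_cost.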


string PLearn::NeighborhoodSmoothnessNNet::classname ( ) const [virtual]

Reimplemented from PLearn::Object.

Definition at line 94 of file NeighborhoodSmoothnessNNet.cc.

void PLearn::NeighborhoodSmoothnessNNet::computeCostsFromOutputs ( const Vec input,
const Vec output,
const Vec target,
Vec costs 
) const [virtual]

*** SUBCLASS WRITING: ***

This should be defined in subclasses to compute the weighted costs from already computed output. The costs should correspond to the cost names returned by getTestCostNames().

NOTE: In exotic cases, the cost may also depend on some info in the input; that's why the method also gets to see it.

Implements PLearn::PLearner.

Definition at line 720 of file NeighborhoodSmoothnessNNet.cc.

References output_and_target_to_cost.

{
    output_and_target_to_cost->fprop(outputv&targetv, costsv); 
}
void PLearn::NeighborhoodSmoothnessNNet::computeOutput ( const Vec input,
Vec output 
) const [virtual]

*** SUBCLASS WRITING: ***

This should be defined in subclasses to compute the output from the input.

Reimplemented from PLearn::PLearner.

Definition at line 702 of file NeighborhoodSmoothnessNNet.cc.

References f.

{
    f->fprop(inputv,outputv);
}
void PLearn::NeighborhoodSmoothnessNNet::computeOutputAndCosts ( const Vec input,
const Vec target,
Vec output,
Vec costs 
) const [virtual]

Default calls computeOutput and computeCostsFromOutputs.

You may override this if you have a more efficient way to compute both output and weighted costs at the same time.

Reimplemented from PLearn::PLearner.

Definition at line 711 of file NeighborhoodSmoothnessNNet.cc.

References test_costf.

{
    test_costf->fprop(inputv&targetv, outputv&costsv);
}
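A hedged usage sketch for these two methods, assuming a built and trained learner net as in the earlier sketch (note that build_() creates a target Var of size targetsize()-1, the last target column carrying the bag tag used by train()):

    Vec input(net->inputsize());
    Vec target(net->targetsize() - 1);   // bag tag column excluded
    Vec output(net->outputsize());
    Vec costs(net->getTestCostNames().length());

    net->computeOutput(input, output);                        // output only
    net->computeOutputAndCosts(input, target, output, costs); // output and test costs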
void PLearn::NeighborhoodSmoothnessNNet::declareOptions ( OptionList ol) [static, protected]

Declares this class' options.

Reimplemented from PLearn::PLearner.

Definition at line 126 of file NeighborhoodSmoothnessNNet.cc.

References batch_size, bias_decay, PLearn::OptionBase::buildoption, classification_regularizer, cost_funcs, PLearn::declareOption(), PLearn::PLearner::declareOptions(), direct_in_to_out, direct_in_to_out_weight_decay, L1_penalty, layer1_bias_decay, layer1_weight_decay, layer2_bias_decay, layer2_weight_decay, PLearn::OptionBase::learntoption, max_n_instances, nhidden, nhidden2, noutputs, optimizer, output_layer_bias_decay, output_layer_weight_decay, output_transfer_func, paramsvalues, penalty_type, sigma_hidden, sne_weight, and weight_decay.

{
    declareOption(ol, "max_n_instances", &NeighborhoodSmoothnessNNet::max_n_instances, OptionBase::buildoption, 
                  "    maximum number of instances (input vectors x_i) allowed\n");

    declareOption(ol, "nhidden", &NeighborhoodSmoothnessNNet::nhidden, OptionBase::buildoption, 
                  "    number of hidden units in first hidden layer (0 means no hidden layer)\n");

    declareOption(ol, "nhidden2", &NeighborhoodSmoothnessNNet::nhidden2, OptionBase::buildoption, 
                  "    number of hidden units in second hidden layer (0 means no hidden layer)\n");

    declareOption(ol, "sne_weight", &NeighborhoodSmoothnessNNet::sne_weight, OptionBase::buildoption, 
                  "    The weight of the SNE cost in the total cost optimized.");

    declareOption(ol, "sigma_hidden", &NeighborhoodSmoothnessNNet::sigma_hidden, OptionBase::buildoption, 
                  "    The bandwidth of the Gaussian kernel used to compute the similarity\n"
                  "    between hidden layers.");

    declareOption(ol, "noutputs", &NeighborhoodSmoothnessNNet::noutputs, OptionBase::buildoption, 
                  "    number of output units. This gives this learner its outputsize.\n"
                  "    It is typically of the same dimensionality as the target for regression problems \n"
                  "    But for classification problems where target is just the class number, noutputs is \n"
                  "    usually of dimensionality number of classes (as we want to output a score or probability \n"
                  "    vector, one per class)");

    declareOption(ol, "weight_decay", &NeighborhoodSmoothnessNNet::weight_decay, OptionBase::buildoption, 
                  "    global weight decay for all layers\n");

    declareOption(ol, "bias_decay", &NeighborhoodSmoothnessNNet::bias_decay, OptionBase::buildoption, 
                  "    global bias decay for all layers\n");

    declareOption(ol, "layer1_weight_decay", &NeighborhoodSmoothnessNNet::layer1_weight_decay, OptionBase::buildoption, 
                  "    Additional weight decay for the first hidden layer.  Is added to weight_decay.\n");
    declareOption(ol, "layer1_bias_decay", &NeighborhoodSmoothnessNNet::layer1_bias_decay, OptionBase::buildoption, 
                  "    Additional bias decay for the first hidden layer.  Is added to bias_decay.\n");

    declareOption(ol, "layer2_weight_decay", &NeighborhoodSmoothnessNNet::layer2_weight_decay, OptionBase::buildoption, 
                  "    Additional weight decay for the second hidden layer.  Is added to weight_decay.\n");

    declareOption(ol, "layer2_bias_decay", &NeighborhoodSmoothnessNNet::layer2_bias_decay, OptionBase::buildoption, 
                  "    Additional bias decay for the second hidden layer.  Is added to bias_decay.\n");

    declareOption(ol, "output_layer_weight_decay", &NeighborhoodSmoothnessNNet::output_layer_weight_decay, OptionBase::buildoption, 
                  "    Additional weight decay for the output layer.  Is added to 'weight_decay'.\n");

    declareOption(ol, "output_layer_bias_decay", &NeighborhoodSmoothnessNNet::output_layer_bias_decay, OptionBase::buildoption, 
                  "    Additional bias decay for the output layer.  Is added to 'bias_decay'.\n");

    declareOption(ol, "direct_in_to_out_weight_decay", &NeighborhoodSmoothnessNNet::direct_in_to_out_weight_decay, OptionBase::buildoption, 
                  "    Additional weight decay for the direct in-to-out layer.  Is added to 'weight_decay'.\n");

    declareOption(ol, "penalty_type", &NeighborhoodSmoothnessNNet::penalty_type,
                  OptionBase::buildoption,
                  "    Penalty to use on the weights (for weight and bias decay).\n"
                  "    Can be any of:\n"
                  "      - \"L1\": L1 norm,\n"
                  "      - \"L1_square\": square of the L1 norm,\n"
                  "      - \"L2_square\" (default): square of the L2 norm.\n");

    declareOption(ol, "L1_penalty", &NeighborhoodSmoothnessNNet::L1_penalty, OptionBase::buildoption, 
                  "    Deprecated - You should use \"penalty_type\" instead\n"
                  "    should we use L1 penalty instead of the default L2 penalty on the weights?\n");

    declareOption(ol, "direct_in_to_out", &NeighborhoodSmoothnessNNet::direct_in_to_out, OptionBase::buildoption, 
                  "    should we include direct input to output connections?\n");

    declareOption(ol, "output_transfer_func", &NeighborhoodSmoothnessNNet::output_transfer_func, OptionBase::buildoption, 
                  "    what transfer function to use for ouput layer? \n"
                  "    one of: tanh, sigmoid, exp, softplus, softmax \n"
                  "    or interval(<minval>,<maxval>), which stands for\n"
                  "    <minval>+(<maxval>-<minval>)*sigmoid(.).\n"
                  "    An empty string or \"none\" means no output transfer function \n");

    declareOption(ol, "cost_funcs", &NeighborhoodSmoothnessNNet::cost_funcs, OptionBase::buildoption, 
                  "    a list of cost functions to use\n"
                  "    in the form \"[ cf1; cf2; cf3; ... ]\" where each function is one of: \n"
                  "      mse (for regression)\n"
                  "      mse_onehot (for classification)\n"
                  "      NLL (negative log likelihood -log(p[c]) for classification) \n"
                  "      class_error (classification error) \n"
                  "      binary_class_error (classification error for a 0-1 binary classifier)\n"
                  "      multiclass_error\n"
                  "      cross_entropy (for binary classification)\n"
                  "      stable_cross_entropy (more accurate backprop and possible regularization, for binary classification)\n"
                  "      lift_output (not a real cost function, just the output for lift computation)\n"
                  "    The first function of the list will be used as \n"
                  "    the objective function to optimize \n"
                  "    (possibly with an added weight decay penalty) \n");
  
    declareOption(ol, "classification_regularizer", &NeighborhoodSmoothnessNNet::classification_regularizer, OptionBase::buildoption, 
                  "    used only in the stable_cross_entropy cost function, to fight overfitting (0<=r<1)\n");

    declareOption(ol, "optimizer", &NeighborhoodSmoothnessNNet::optimizer, OptionBase::buildoption, 
                  "    specify the optimizer to use\n");

    declareOption(ol, "batch_size", &NeighborhoodSmoothnessNNet::batch_size, OptionBase::buildoption, 
                  "    how many samples to use to estimate the avergage gradient before updating the weights\n"
                  "    0 is equivalent to specifying training_set->n_non_missing_rows() \n");
    // TODO Not really, since the matrix given typically has much more rows (KNNVMatrix) than input samples.

    declareOption(ol, "paramsvalues", &NeighborhoodSmoothnessNNet::paramsvalues, OptionBase::learntoption, 
                  "    The learned parameter vector\n");

    inherited::declareOptions(ol);

}
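
The options declared above can also be set from their string serialization, continuing the earlier sketch. This assumes the standard PLearn Object::setOption(optionname, value) interface; the values are illustrative:

    net->setOption("nhidden", "50");
    net->setOption("sigma_hidden", "2.0");
    net->setOption("penalty_type", "L1");
    net->build();   // re-build after tuning options, as described in build_()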


static const PPath& PLearn::NeighborhoodSmoothnessNNet::declaringFile ( ) [inline, static]

Reimplemented from PLearn::PLearner.

Definition at line 144 of file NeighborhoodSmoothnessNNet.h.

NeighborhoodSmoothnessNNet * PLearn::NeighborhoodSmoothnessNNet::deepCopy ( CopiesMap copies) const [virtual]

Reimplemented from PLearn::PLearner.

Definition at line 94 of file NeighborhoodSmoothnessNNet.cc.

void PLearn::NeighborhoodSmoothnessNNet::forget ( ) [virtual]

*** SUBCLASS WRITING: ***

(Re-)initializes the PLearner in its fresh state (that state may depend on the 'seed' option) and sets 'stage' back to 0 (this is the stage of a fresh learner!)

A typical forget() method should do the following:

  • initialize the learner's parameters, using this random generator
  • stage = 0;

This method is typically called by the build_() method, after it has finished setting up the parameters, and if it is deemed useful to set or reset the learner in its fresh state. (Remember build may be called after modifying options that do not necessarily require the learner to restart from a fresh state...) forget is also called by the setTrainingSet method, after calling build(), so it will generally be called TWICE during setTrainingSet!

Reimplemented from PLearn::PLearner.

Definition at line 780 of file NeighborhoodSmoothnessNNet.cc.

References initializeParams(), PLearn::PLearner::stage, and PLearn::PLearner::train_set.

Referenced by setTrainingSet().

{
    if (train_set) initializeParams();
    stage = 0;
}


OptionList & PLearn::NeighborhoodSmoothnessNNet::getOptionList ( ) const [virtual]

Reimplemented from PLearn::Object.

Definition at line 94 of file NeighborhoodSmoothnessNNet.cc.

OptionMap & PLearn::NeighborhoodSmoothnessNNet::getOptionMap ( ) const [virtual]

Reimplemented from PLearn::Object.

Definition at line 94 of file NeighborhoodSmoothnessNNet.cc.

RemoteMethodMap & PLearn::NeighborhoodSmoothnessNNet::getRemoteMethodMap ( ) const [virtual]

Reimplemented from PLearn::Object.

Definition at line 94 of file NeighborhoodSmoothnessNNet.cc.

TVec< string > PLearn::NeighborhoodSmoothnessNNet::getTestCostNames ( ) const [virtual]

*** SUBCLASS WRITING: ***

This should return the names of the costs computed by computeCostsFromOutputs.

Implements PLearn::PLearner.

Definition at line 578 of file NeighborhoodSmoothnessNNet.cc.

References cost_funcs.

{ 
    return cost_funcs;
}
TVec< string > PLearn::NeighborhoodSmoothnessNNet::getTrainCostNames ( ) const [virtual]

*** SUBCLASS WRITING: ***

This should return the names of the objective costs that the train method computes and for which it updates the VecStatsCollector train_stats.

Implements PLearn::PLearner.

Definition at line 570 of file NeighborhoodSmoothnessNNet.cc.

References cost_funcs.

{
    return (cost_funcs[0]+"+penalty+SNE") & cost_funcs;
}
void PLearn::NeighborhoodSmoothnessNNet::initializeParams ( ) [protected]

Definition at line 729 of file NeighborhoodSmoothnessNNet.cc.

References direct_in_to_out, PLearn::fill_random_normal(), PLearn::PLearner::inputsize(), PLearn::manual_seed(), nhidden, nhidden2, optimizer, PLearn::seed(), PLearn::PLearner::seed_, w1, w2, wdirect, and wout.

Referenced by build_(), and forget().

{
    if (seed_>=0)
        manual_seed(seed_);
    else
        PLearn::seed();

    real delta = 1. / inputsize();

    /*
      if(direct_in_to_out)
      {
      //fill_random_uniform(wdirect->value, -delta, +delta);
      fill_random_normal(wdirect->value, 0, delta);
      //wdirect->matValue(0).clear();
      }
    */
    if(nhidden>0)
    {
        //fill_random_uniform(w1->value, -delta, +delta);
        //delta = 1./sqrt(nhidden);
        fill_random_normal(w1->value, 0, delta);
        if(direct_in_to_out)
        {
            //fill_random_uniform(wdirect->value, -delta, +delta);
            fill_random_normal(wdirect->value, 0, 0.01*delta);
            wdirect->matValue(0).clear();
        }
        delta = 1./nhidden;
        w1->matValue(0).clear();
    }
    if(nhidden2>0)
    {
        //fill_random_uniform(w2->value, -delta, +delta);
        //delta = 1./sqrt(nhidden2);
        fill_random_normal(w2->value, 0, delta);
        delta = 1./nhidden2;
        w2->matValue(0).clear();
    }
    //fill_random_uniform(wout->value, -delta, +delta);
    fill_random_normal(wout->value, 0, delta);
    wout->matValue(0).clear();

    // Reset optimizer
    if(optimizer)
        optimizer->reset();
}
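
In summary (assuming fill_random_normal's third argument is the standard deviation, as it is used here), weights are drawn with fan-in scaling:

    W^{(1)} \sim \mathcal{N}\!\left(0,\ \tfrac{1}{\text{inputsize}}\right), \qquad
    W^{(2)} \sim \mathcal{N}\!\left(0,\ \tfrac{1}{\text{nhidden}}\right), \qquad
    W^{(\text{out})} \sim \mathcal{N}\!\left(0,\ \tfrac{1}{\text{fan-in}}\right)

where the output layer's fan-in is nhidden2 if a second hidden layer is present and nhidden otherwise, wdirect (if used) is drawn at a 100x smaller scale, and the first row of each weight matrix (the biases) is cleared to zero.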


void PLearn::NeighborhoodSmoothnessNNet::makeDeepCopyFromShallowCopy ( CopiesMap copies) [virtual]

Transforms a shallow copy into a deep copy.

Reimplemented from PLearn::PLearner.

Definition at line 789 of file NeighborhoodSmoothnessNNet.cc.

References bag_hidden, bag_inputs, bag_output, bag_size, cost_funcs, costs, PLearn::deepCopyField(), f, f_input_to_hidden, input, invars, invars_to_training_cost, last_hidden, PLearn::PLearner::makeDeepCopyFromShallowCopy(), optimizer, output, output_and_target_to_cost, p_ij, params, paramsvalues, penalties, sampleweight, target, test_costf, test_costs, training_cost, w1, w2, wdirect, and wout.


int PLearn::NeighborhoodSmoothnessNNet::outputsize ( ) const [virtual]

SUBCLASS WRITING: override this so that it returns the size of this learner's output, as a function of its inputsize(), targetsize() and set options.

Implements PLearn::PLearner.

Definition at line 564 of file NeighborhoodSmoothnessNNet.cc.

References noutputs.

Referenced by build_().

{ return noutputs; }


void PLearn::NeighborhoodSmoothnessNNet::setTrainingSet ( VMat  training_set,
bool  call_forget = true 
) [virtual]

Declares the training set.

Then calls build() and forget() if necessary. Also sets this learner's inputsize_ targetsize_ weightsize_ from those of the training_set. Note: You shouldn't have to override this in subclasses, except in maybe to forward the call to an underlying learner.

Reimplemented from PLearn::PLearner.

Definition at line 583 of file NeighborhoodSmoothnessNNet.cc.

References build(), forget(), PLearn::PLearner::inputsize_, PLearn::VMat::length(), PLERROR, PLearn::PLearner::targetsize_, PLearn::PLearner::train_set, PLearn::PLearner::weightsize_, and PLearn::VMat::width().

{ 
    // YB: I am not sure a build is really necessary when only the LENGTH of the train_set has changed?
    // Non-parametric methods that use the length should do their "resize" in train, not in build.
    bool training_set_has_changed =
        !train_set
        || train_set->width()      != training_set->width()
        || train_set->length()     != training_set->length()
        || train_set->inputsize()  != training_set->inputsize()
        || train_set->weightsize() != training_set->weightsize()
        || train_set->targetsize() != training_set->targetsize();
    train_set = training_set;

    if (training_set_has_changed && inputsize_<0)
    {
        // The last input column of the training VMat holds the p_ij values
        // consumed by build_(), so the true input dimension is one less.
        inputsize_ = train_set->inputsize()-1;
        targetsize_ = train_set->targetsize();
        weightsize_ = train_set->weightsize();
    } else if (train_set->inputsize() != training_set->inputsize()) {
        PLERROR("In NeighborhoodSmoothnessNNet::setTrainingSet - You can't change the inputsize of the training set");
    }
    if (training_set_has_changed || call_forget)
        build(); // CHANGE MADE BY YOSHUA: otherwise after a setTrainingSet the build is not completed in an NNet
    if (call_forget)
        forget();
}


void PLearn::NeighborhoodSmoothnessNNet::train ( ) [virtual]

*** SUBCLASS WRITING: ***

The role of the train method is to bring the learner up to stage==nstages, updating the stats with training costs measured on-line in the process.

TYPICAL CODE:

  static Vec input;  // static so we don't reallocate/deallocate memory each time...
  static Vec target; // (but be careful that static means shared!)
  input.resize(inputsize());    // the train_set's inputsize()
  target.resize(targetsize());  // the train_set's targetsize()
  real weight;
  
  if(!train_stats)   // make a default stats collector, in case there's none
      train_stats = new VecStatsCollector();
  
  if(nstages<stage)  // asking to revert to a previous stage!
      forget();      // reset the learner to stage=0
  
  while(stage<nstages)
  {
      // clear statistics of previous epoch
      train_stats->forget(); 
            
      //... train for 1 stage, and update train_stats,
      // using train_set->getSample(input, target, weight);
      // and train_stats->update(train_costs)
          
      ++stage;
      train_stats->finalize(); // finalize statistics for this epoch
  }

Implements PLearn::PLearner.

Definition at line 613 of file NeighborhoodSmoothnessNNet.cc.

References batch_size, build(), PLearn::endl(), f, i, invars_to_training_cost, PLearn::PP< T >::isNull(), PLearn::VMat::length(), max_n_instances, PLearn::PLearner::nstages, optimizer, output_and_target_to_cost, params, PLERROR, PLearn::PLearner::report_progress, PLearn::PLearner::stage, PLearn::sumOverBags(), PLearn::SumOverBagsVariable::TARGET_COLUMN_FIRST, test_costf, PLearn::tostring(), PLearn::PLearner::train_set, PLearn::PLearner::train_stats, PLearn::PLearner::verbosity, and PLearn::VMat::width().

{
    // NeighborhoodSmoothnessNNet nstages is the number of epochs (whole passes through the training set)
    // while optimizer nstages is the number of weight updates.
    // So the relationship between the two depends on whether we are in stochastic, batch or minibatch mode.

    if(!train_set)
        PLERROR("In NeighborhoodSmoothnessNNet::train, you did not setTrainingSet");
    
    if(!train_stats)
        PLERROR("In NeighborhoodSmoothnessNNet::train, you did not setTrainStatsCollector");

    if(f.isNull()) // Net has not been properly built yet (because build was called before the learner had a proper training set)
        build();

    int n_bags = -1;
    // We must count the nb of bags in the training set.
    {
        n_bags=0;
        int l = train_set->length();
        PP<ProgressBar> pb;
        if(report_progress)
            pb = new ProgressBar("Counting nb bags in train_set for NeighborhoodSmoothnessNNet", l);
        Vec row(train_set->width());
        int tag_column = train_set->inputsize() + train_set->targetsize() - 1;
        for (int i=0;i<l;i++) {
            train_set->getRow(i,row);
            if (int(row[tag_column]) & SumOverBagsVariable::TARGET_COLUMN_FIRST) {
                // Indicates the beginning of a new bag.
                n_bags++;
            }
            if(pb)
                pb->update(i);
        }
    }

    int true_batch_size = batch_size;
    if (true_batch_size <= 0) {
        // The real batch size is actually the number of bags in the training set.
        true_batch_size = n_bags;
    }

    // We can now compute the total cost.
    Var totalcost = sumOverBags(train_set, invars_to_training_cost, max_n_instances, true_batch_size, true);

    // Number of optimizer stages corresponding to one learner stage (one epoch).
    int optstage_per_lstage = 0;
    if (batch_size<=0) {
        optstage_per_lstage = 1;
    } else {
        optstage_per_lstage = n_bags/batch_size;
    }

    if(optimizer) {
        optimizer->setToOptimize(params, totalcost);  
        optimizer->build();
    }

    PP<ProgressBar> pb;
    if(report_progress)
        pb = new ProgressBar("Training NeighborhoodSmoothnessNNet from stage " + tostring(stage) + " to " + tostring(nstages), nstages-stage);

    int initial_stage = stage;
    bool early_stop=false;
    while(stage<nstages && !early_stop)
    {
        optimizer->nstages = optstage_per_lstage;
        train_stats->forget();
        optimizer->early_stop = false;
        optimizer->optimizeN(*train_stats);
        train_stats->finalize();
        if(verbosity>2)
            cout << "Epoch " << stage << " train objective: " << train_stats->getMean() << endl;
        ++stage;
        if(pb)
            pb->update(stage-initial_stage);
    }
    if(verbosity>1)
        cout << "EPOCH " << stage << " train objective: " << train_stats->getMean() << endl;

    // TODO Not sure if this is needed, but just in case...
    output_and_target_to_cost->recomputeParents();
    test_costf->recomputeParents();

}
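
A worked example of the stage bookkeeping above (illustrative numbers): with n_bags = 100 and batch_size = 10, each learner stage (one epoch) corresponds to optstage_per_lstage = 100/10 = 10 optimizer stages; with batch_size <= 0, the whole set of bags forms a single batch and one optimizer stage covers the epoch.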



Member Data Documentation

StaticInitializer PLearn::NeighborhoodSmoothnessNNet::_static_initializer_ [static]

Reimplemented from PLearn::PLearner.

Definition at line 144 of file NeighborhoodSmoothnessNNet.h.

Definition at line 70 of file NeighborhoodSmoothnessNNet.h.

Referenced by build_(), and makeDeepCopyFromShallowCopy().

Definition at line 68 of file NeighborhoodSmoothnessNNet.h.

Referenced by build_(), and makeDeepCopyFromShallowCopy().

Definition at line 69 of file NeighborhoodSmoothnessNNet.h.

Referenced by makeDeepCopyFromShallowCopy().

Definition at line 67 of file NeighborhoodSmoothnessNNet.h.

Referenced by build_(), and makeDeepCopyFromShallowCopy().

Definition at line 133 of file NeighborhoodSmoothnessNNet.h.

Referenced by declareOptions(), and train().

Definition at line 109 of file NeighborhoodSmoothnessNNet.h.

Referenced by build_(), and declareOptions().

Definition at line 117 of file NeighborhoodSmoothnessNNet.h.

Referenced by build_(), and declareOptions().

Array< string > PLearn::NeighborhoodSmoothnessNNet::cost_funcs

a list of cost functions to use in the form "[ cf1; cf2; cf3; ... ]"

Definition at line 128 of file NeighborhoodSmoothnessNNet.h.

Referenced by build_(), declareOptions(), getTestCostNames(), getTrainCostNames(), and makeDeepCopyFromShallowCopy().

Definition at line 74 of file NeighborhoodSmoothnessNNet.h.

Referenced by build_(), and makeDeepCopyFromShallowCopy().

Definition at line 121 of file NeighborhoodSmoothnessNNet.h.

Referenced by build_(), declareOptions(), and initializeParams().

Definition at line 116 of file NeighborhoodSmoothnessNNet.h.

Referenced by build_(), and declareOptions().

Definition at line 87 of file NeighborhoodSmoothnessNNet.h.

Referenced by build_(), and makeDeepCopyFromShallowCopy().

Definition at line 58 of file NeighborhoodSmoothnessNNet.h.

Referenced by build_(), and makeDeepCopyFromShallowCopy().

Definition at line 123 of file NeighborhoodSmoothnessNNet.h.

Referenced by build_().

Definition at line 123 of file NeighborhoodSmoothnessNNet.h.

Referenced by build_().

Definition at line 78 of file NeighborhoodSmoothnessNNet.h.

Referenced by build_(), and makeDeepCopyFromShallowCopy().

Definition at line 72 of file NeighborhoodSmoothnessNNet.h.

Referenced by build_(), makeDeepCopyFromShallowCopy(), and train().

Definition at line 120 of file NeighborhoodSmoothnessNNet.h.

Referenced by build_(), and declareOptions().

Definition at line 65 of file NeighborhoodSmoothnessNNet.h.

Referenced by build_(), and makeDeepCopyFromShallowCopy().

Definition at line 111 of file NeighborhoodSmoothnessNNet.h.

Referenced by build_(), and declareOptions().

Definition at line 110 of file NeighborhoodSmoothnessNNet.h.

Referenced by build_(), and declareOptions().

Definition at line 113 of file NeighborhoodSmoothnessNNet.h.

Referenced by build_(), and declareOptions().

Definition at line 112 of file NeighborhoodSmoothnessNNet.h.

Referenced by build_(), and declareOptions().

Definition at line 99 of file NeighborhoodSmoothnessNNet.h.

Referenced by build_(), declareOptions(), and train().

Definition at line 101 of file NeighborhoodSmoothnessNNet.h.

Referenced by build_(), declareOptions(), and initializeParams().

Definition at line 102 of file NeighborhoodSmoothnessNNet.h.

Referenced by build_(), declareOptions(), and initializeParams().

Definition at line 103 of file NeighborhoodSmoothnessNNet.h.

Referenced by declareOptions(), and outputsize().

Definition at line 66 of file NeighborhoodSmoothnessNNet.h.

Referenced by build_(), and makeDeepCopyFromShallowCopy().

Definition at line 115 of file NeighborhoodSmoothnessNNet.h.

Referenced by build_(), and declareOptions().

Definition at line 114 of file NeighborhoodSmoothnessNNet.h.

Referenced by build_(), and declareOptions().

Definition at line 122 of file NeighborhoodSmoothnessNNet.h.

Referenced by build_(), and declareOptions().

Definition at line 82 of file NeighborhoodSmoothnessNNet.h.

Referenced by build_(), and makeDeepCopyFromShallowCopy().

Definition at line 79 of file NeighborhoodSmoothnessNNet.h.

Referenced by build_(), makeDeepCopyFromShallowCopy(), and train().

Definition at line 75 of file NeighborhoodSmoothnessNNet.h.

Referenced by build_(), and makeDeepCopyFromShallowCopy().

Definition at line 119 of file NeighborhoodSmoothnessNNet.h.

Referenced by build_(), and declareOptions().

Definition at line 60 of file NeighborhoodSmoothnessNNet.h.

Referenced by build_(), and makeDeepCopyFromShallowCopy().

Definition at line 105 of file NeighborhoodSmoothnessNNet.h.

Referenced by build_(), and declareOptions().

Definition at line 106 of file NeighborhoodSmoothnessNNet.h.

Referenced by build_(), and declareOptions().

Definition at line 59 of file NeighborhoodSmoothnessNNet.h.

Referenced by build_(), and makeDeepCopyFromShallowCopy().

Definition at line 71 of file NeighborhoodSmoothnessNNet.h.

Definition at line 77 of file NeighborhoodSmoothnessNNet.h.

Referenced by build_(), and makeDeepCopyFromShallowCopy().

Definition at line 76 of file NeighborhoodSmoothnessNNet.h.

Referenced by build_(), and makeDeepCopyFromShallowCopy().

Definition at line 108 of file NeighborhoodSmoothnessNNet.h.

Referenced by build_(), and declareOptions().


The documentation for this class was generated from the following files:
 NeighborhoodSmoothnessNNet.h
 NeighborhoodSmoothnessNNet.cc