PLearn 0.1
PLearn::RBMRateLayer Class Reference

Layer in an RBM consisting of rate-coded units.

#include <RBMRateLayer.h>

Inheritance diagram for PLearn::RBMRateLayer (derives from PLearn::RBMLayer): diagram omitted.
Collaboration diagram for PLearn::RBMRateLayer: diagram omitted.

List of all members.

Public Member Functions

 RBMRateLayer (real the_learning_rate=0.)
 Default constructor.
virtual void generateSample ()
 generate a sample, and update the sample field
virtual void generateSamples ()
 batch version
virtual void computeExpectation ()
 compute the expectation
virtual void computeExpectations ()
 batch version
virtual void fprop (const Vec &input, Vec &output) const
 forward propagation
virtual void bpropUpdate (const Vec &input, const Vec &output, Vec &input_gradient, const Vec &output_gradient, bool accumulate=false)
 back-propagates the output gradient to the input
virtual void bpropUpdate (const Mat &inputs, const Mat &outputs, Mat &input_gradients, const Mat &output_gradients, bool accumulate=false)
 back-propagates the output gradient to the input, in mini-batch mode
virtual real fpropNLL (const Vec &target)
 Computes the negative log-likelihood of target given the internal activations of the layer.
virtual void bpropNLL (const Vec &target, real nll, Vec &bias_gradient)
 Computes the gradient of the negative log-likelihood of target with respect to the layer's bias, given the internal activations.
virtual real energy (const Vec &unit_values) const
virtual real freeEnergyContribution (const Vec &unit_activations) const
 Computes $-\log\bigl(\sum_{h} \exp(h^\top \mathrm{unit\_activations})\bigr)$, where the sum runs over all possible values of $h$. This quantity is used for computing the free energy of a sample x in the OTHER layer of an RBM, from which unit_activations was computed.
virtual void freeEnergyContributionGradient (const Vec &unit_activations, Vec &unit_activations_gradient, real output_gradient=1, bool accumulate=false) const
 Computes the gradient of the result of freeEnergyContribution, $-\log\bigl(\sum_{h} \exp(h^\top \mathrm{unit\_activations})\bigr)$, with respect to unit_activations.
virtual int getConfigurationCount ()
 Returns the number of different configurations the layer can be in.
virtual void getConfiguration (int conf_index, Vec &output)
 Computes the conf_index-th configuration of the layer.
virtual string classname () const
virtual OptionList & getOptionList () const
virtual OptionMap & getOptionMap () const
virtual RemoteMethodMap & getRemoteMethodMap () const
virtual RBMRateLayer * deepCopy (CopiesMap &copies) const
virtual void build ()
 Post-constructor.
virtual void makeDeepCopyFromShallowCopy (CopiesMap &copies)
 Transforms a shallow copy into a deep copy.

Static Public Member Functions

static string _classname_ ()
static OptionList & _getOptionList_ ()
static RemoteMethodMap & _getRemoteMethodMap_ ()
static Object * _new_instance_for_typemap_ ()
static bool _isa_ (const Object *o)
static void _static_initialize_ ()
static const PPath & declaringFile ()

Public Attributes

int n_spikes
 Maximum number of spikes for each neuron.

Static Public Attributes

static StaticInitializer _static_initializer_

Static Protected Member Functions

static void declareOptions (OptionList &ol)
 Declares the class options.

Protected Attributes

Vec tmp_softmax

Private Types

typedef RBMLayer inherited

Private Member Functions

void build_ ()
 This does the actual building.

Detailed Description

Layer in an RBM consisting of rate-coded units.

Definition at line 51 of file RBMRateLayer.h.
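
For orientation, a minimal usage sketch follows. It assumes PLearn's PP smart-pointer convention and the size option inherited from RBMLayer; the chosen values are illustrative, not taken from PLearn's examples.

#include "RBMRateLayer.h"
using namespace PLearn;

// Hypothetical setup of a rate-coded layer: the option names size (from
// RBMLayer) and n_spikes (declared in this class's declareOptions()) are
// real; the values below are made up for illustration.
PP<RBMRateLayer> layer = new RBMRateLayer( 0.01 /* learning rate */ );
layer->size     = 100;   // number of units in the layer
layer->n_spikes = 10;    // maximum number of spikes per neuron
layer->build();          // post-constructor; PLERROR if n_spikes < 1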


Member Typedef Documentation

typedef RBMLayer PLearn::RBMRateLayer::inherited [private]

Reimplemented from PLearn::RBMLayer.

Definition at line 53 of file RBMRateLayer.h.


Constructor & Destructor Documentation

PLearn::RBMRateLayer::RBMRateLayer ( real  the_learning_rate = 0.)

Default constructor.

Definition at line 53 of file RBMRateLayer.cc.

RBMRateLayer::RBMRateLayer( real the_learning_rate ) :
    inherited( the_learning_rate ),
    n_spikes( 10 )
{
}

Member Function Documentation

string PLearn::RBMRateLayer::_classname_ ( ) [static]

Reimplemented from PLearn::RBMLayer.

Definition at line 51 of file RBMRateLayer.cc.

OptionList & PLearn::RBMRateLayer::_getOptionList_ ( ) [static]

Reimplemented from PLearn::RBMLayer.

Definition at line 51 of file RBMRateLayer.cc.

RemoteMethodMap & PLearn::RBMRateLayer::_getRemoteMethodMap_ ( ) [static]

Reimplemented from PLearn::RBMLayer.

Definition at line 51 of file RBMRateLayer.cc.

bool PLearn::RBMRateLayer::_isa_ ( const Object * o) [static]

Reimplemented from PLearn::RBMLayer.

Definition at line 51 of file RBMRateLayer.cc.

Object * PLearn::RBMRateLayer::_new_instance_for_typemap_ ( ) [static]

Reimplemented from PLearn::Object.

Definition at line 51 of file RBMRateLayer.cc.

void PLearn::RBMRateLayer::_static_initialize_ ( ) [static]

Reimplemented from PLearn::RBMLayer.

Definition at line 51 of file RBMRateLayer.cc.

void PLearn::RBMRateLayer::bpropNLL ( const Vec & target,
real  nll,
Vec & bias_gradient 
) [virtual]

Computes the gradient of the negative log-likelihood of target with respect to the layer's bias, given the internal activations.

Reimplemented from PLearn::RBMLayer.

Definition at line 236 of file RBMRateLayer.cc.

References computeExpectation(), PLearn::RBMLayer::expectation, PLearn::OnlineLearningModule::input_size, PLASSERT, PLERROR, PLearn::TVec< T >::resize(), PLearn::TVec< T >::size(), PLearn::RBMLayer::size, and PLearn::substract().

{
    PLERROR("In RBMRateLayer::bpropNLL(): not implemented");
    computeExpectation();

    PLASSERT( target.size() == input_size );
    bias_gradient.resize( size );

    // bias_gradient = expectation - target
    substract(expectation, target, bias_gradient);
}
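
Although the PLERROR at the top makes the rest of this body unreachable, the code below it follows the standard cross-entropy bias gradient. Assuming expectation holds $\sigma(a)$ for activation $a$ and target $t$, this is

    $\frac{\partial\,\mathrm{NLL}}{\partial b_i} = \sigma(a_i) - t_i = \mathrm{expectation}_i - \mathrm{target}_i$

hence the call to computeExpectation() followed by substract(expectation, target, bias_gradient).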


void PLearn::RBMRateLayer::bpropUpdate ( const Vec & input,
const Vec & output,
Vec & input_gradient,
const Vec & output_gradient,
bool  accumulate = false 
) [virtual]

back-propagates the output gradient to the input

Implements PLearn::RBMLayer.

Definition at line 147 of file RBMRateLayer.cc.

References PLearn::RBMLayer::applyBiasDecay(), PLearn::RBMLayer::bias, PLearn::RBMLayer::bias_inc, PLearn::TVec< T >::clear(), i, PLearn::RBMLayer::learning_rate, PLearn::RBMLayer::momentum, n_spikes, PLASSERT, PLASSERT_MSG, PLearn::TVec< T >::resize(), PLearn::RBMLayer::size, and PLearn::TVec< T >::size().

{
    PLASSERT( input.size() == size );
    PLASSERT( output.size() == size );
    PLASSERT( output_gradient.size() == size );

    if( accumulate )
    {
        PLASSERT_MSG( input_gradient.size() == size,
                      "Cannot resize input_gradient AND accumulate into it" );
    }
    else
    {
        input_gradient.resize( size );
        input_gradient.clear();
    }

    if( momentum != 0. )
        bias_inc.resize( size );
    
    for( int i=0 ; i<size ; i++ )
    {
        real output_i = output[i];
        real in_grad_i;
        in_grad_i = output_i * (1-output_i) * output_gradient[i] * n_spikes;
        input_gradient[i] += in_grad_i;
        
        if( momentum == 0. )
        {
            // update the bias: bias -= learning_rate * input_gradient
            bias[i] -= learning_rate * in_grad_i;
        }
        else
        {
            // The update rule becomes:
            // bias_inc = momentum * bias_inc - learning_rate * input_gradient
            // bias += bias_inc
            bias_inc[i] = momentum * bias_inc[i] - learning_rate * in_grad_i;
            bias[i] += bias_inc[i];
        }
    }
    applyBiasDecay();
}
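
As a sanity check on the loop above: differentiating the rate-coded activation $f_i = n\_spikes \cdot \sigma(x_i + b_i)$ computed by fprop() gives

    $\frac{\partial f_i}{\partial x_i} = n\_spikes \cdot \sigma(x_i + b_i)\,\bigl(1 - \sigma(x_i + b_i)\bigr)$

which matches in_grad_i above under the assumption that output[i] stores the sigmoid value $\sigma(x_i + b_i)$ rather than the n_spikes-scaled rate.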


void PLearn::RBMRateLayer::bpropUpdate ( const Mat & inputs,
const Mat & outputs,
Mat & input_gradients,
const Mat & output_gradients,
bool  accumulate = false 
) [virtual]

back-propagates the output gradient to the input, in mini-batch mode

Implements PLearn::RBMLayer.

Definition at line 194 of file RBMRateLayer.cc.

References PLERROR.

{
    PLERROR("In RBMRateLayer::bpropUpdate(): mini-batch version of bpropUpdate is not "
            "implemented yet");
}

void PLearn::RBMRateLayer::build ( ) [virtual]

Post-constructor.

The normal implementation should call simply inherited::build(), then this class's build_(). This method should be callable again at later times, after modifying some option fields to change the "architecture" of the object.

Reimplemented from PLearn::RBMLayer.

Definition at line 266 of file RBMRateLayer.cc.

References PLearn::RBMLayer::build(), and build_().


void PLearn::RBMRateLayer::build_ ( ) [private]

This does the actual building.

Reimplemented from PLearn::RBMLayer.

Definition at line 260 of file RBMRateLayer.cc.

References n_spikes, and PLERROR.

Referenced by build().

{
    if( n_spikes < 1 )
        PLERROR("In RBMRateLayer::build_(): n_spikes should be positive");
}


string PLearn::RBMRateLayer::classname ( ) const [virtual]

Reimplemented from PLearn::Object.

Definition at line 51 of file RBMRateLayer.cc.

void PLearn::RBMRateLayer::computeExpectation ( ) [virtual]

compute the expectation

Implements PLearn::RBMLayer.

Definition at line 98 of file RBMRateLayer.cc.

References PLearn::RBMLayer::activation, PLearn::RBMLayer::expectation, PLearn::RBMLayer::expectation_is_up_to_date, PLearn::fastsigmoid(), i, n_spikes, PLearn::sigmoid(), PLearn::RBMLayer::size, and PLearn::OnlineLearningModule::use_fast_approximations.

Referenced by bpropNLL().


void PLearn::RBMRateLayer::computeExpectations ( ) [virtual]

batch version

void PLearn::RBMRateLayer::declareOptions ( OptionList & ol) [static, protected]

Declares the class options.

Reimplemented from PLearn::RBMLayer.

Definition at line 249 of file RBMRateLayer.cc.

References PLearn::OptionBase::buildoption, PLearn::declareOption(), PLearn::RBMLayer::declareOptions(), and n_spikes.

{

    declareOption(ol, "n_spikes", &RBMRateLayer::n_spikes,
                  OptionBase::buildoption,
                  "Maximum number of spikes for each neuron.\n");

    // Now call the parent class' declareOptions
    inherited::declareOptions(ol);
}


static const PPath& PLearn::RBMRateLayer::declaringFile ( ) [inline, static]

Reimplemented from PLearn::RBMLayer.

Definition at line 127 of file RBMRateLayer.h.

RBMRateLayer * PLearn::RBMRateLayer::deepCopy ( CopiesMap & copies) const [virtual]

Reimplemented from PLearn::RBMLayer.

Definition at line 51 of file RBMRateLayer.cc.

real PLearn::RBMRateLayer::energy ( const Vec & unit_values) const [virtual]

Reimplemented from PLearn::RBMLayer.

Definition at line 279 of file RBMRateLayer.cc.

References PLearn::RBMLayer::bias, and PLearn::dot().

{
    return -dot(unit_values, bias);
}
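
In other words, the energy contribution of this layer is linear in its bias:

    $E(\mathbf{v}) = -\mathbf{b}^\top \mathbf{v} = -\sum_i b_i v_i$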


void PLearn::RBMRateLayer::fprop ( const Vec & input,
Vec & output 
) const [virtual]

forward propagation

Reimplemented from PLearn::RBMLayer.

Definition at line 132 of file RBMRateLayer.cc.

References PLearn::RBMLayer::bias, PLearn::fastsigmoid(), i, PLearn::OnlineLearningModule::input_size, n_spikes, PLearn::OnlineLearningModule::output_size, PLASSERT, PLearn::TVec< T >::resize(), PLearn::sigmoid(), PLearn::RBMLayer::size, PLearn::TVec< T >::size(), and PLearn::OnlineLearningModule::use_fast_approximations.

{
    PLASSERT( input.size() == input_size );
    output.resize( output_size );
    if (use_fast_approximations)
        for(int i=0; i<size; i++)
            output[i] = n_spikes*fastsigmoid(input[i]+bias[i]);
    else
        for(int i=0; i<size; i++)
            output[i] = n_spikes*sigmoid(input[i]+bias[i]);
}
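
Both branches implement the same rate-coded activation, one with the tabulated sigmoid approximation and one with the exact sigmoid:

    $\mathrm{output}_i = n\_spikes \cdot \sigma(\mathrm{input}_i + b_i)$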


real PLearn::RBMRateLayer::fpropNLL ( const Vec & target) [virtual]

Computes the negative log-likelihood of target given the internal activations of the layer.

Reimplemented from PLearn::RBMLayer.

Definition at line 206 of file RBMRateLayer.cc.

References PLearn::RBMLayer::activation, i, PLearn::OnlineLearningModule::input_size, PLASSERT, PLERROR, PLearn::TVec< T >::size(), PLearn::RBMLayer::size, PLearn::softplus(), PLearn::tabulated_softplus(), and PLearn::OnlineLearningModule::use_fast_approximations.

{
    PLERROR("In RBMRateLayer::fpropNLL(): not implemented");
    PLASSERT( target.size() == input_size );
    real ret = 0;
    real target_i, activation_i;
    if(use_fast_approximations){
        for( int i=0 ; i<size ; i++ )
        {
            target_i = target[i];
            activation_i = activation[i];
            ret += tabulated_softplus(activation_i) - target_i * activation_i;
            // nll = - target*log(sigmoid(act)) -(1-target)*log(1-sigmoid(act))
            // but it is numerically unstable, so use instead the following identity:
            //     = target*softplus(-act) +(1-target)*(act+softplus(-act))
            //     = act + softplus(-act) - target*act
            //     = softplus(act) - target*act
        }
    } else {
        for( int i=0 ; i<size ; i++ )
        {
            target_i = target[i];
            activation_i = activation[i];
            ret += softplus(activation_i) - target_i * activation_i;
        }
    }

    return ret;
}
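
For reference, the numerically stable identity used in the loop, written out for target $t$ and activation $a$:

    $\mathrm{nll} = -t \log \sigma(a) - (1-t) \log\bigl(1 - \sigma(a)\bigr)
                  = t\,\mathrm{softplus}(-a) + (1-t)\bigl(a + \mathrm{softplus}(-a)\bigr)
                  = \mathrm{softplus}(a) - t\,a$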


real PLearn::RBMRateLayer::freeEnergyContribution ( const Vec & unit_activations) const [virtual]

Computes $-\log\bigl(\sum_{h} \exp(h^\top \mathrm{unit\_activations})\bigr)$, where the sum runs over all possible values of $h$. This quantity is used for computing the free energy of a sample x in the OTHER layer of an RBM, from which unit_activations was computed.

Reimplemented from PLearn::RBMLayer.

Definition at line 284 of file RBMRateLayer.cc.

References a, PLearn::TVec< T >::data(), i, n_spikes, PLASSERT, PLearn::TVec< T >::size(), PLearn::RBMLayer::size, PLearn::softplus(), PLearn::tabulated_softplus(), and PLearn::OnlineLearningModule::use_fast_approximations.

{
    PLASSERT( unit_activations.size() == size );

    // result = -\sum_{i=0}^{size-1} softplus(a_i)
    real result = 0;
    real* a = unit_activations.data();
    for (int i=0; i<size; i++)
    {
        if (use_fast_approximations)
            result -= n_spikes*tabulated_softplus(a[i]);
        else
            result -= n_spikes*softplus(a[i]);
    }
    return result;
}
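
The n_spikes factor admits a short derivation if each rate-coded unit is read as n_spikes tied binary units sharing the activation $a_i$ (an interpretation consistent with the scaling used throughout this class, though not stated explicitly in the source):

    $-\log \sum_{h} \exp(h^\top a)
        = -\log \prod_i \bigl(1 + e^{a_i}\bigr)^{n\_spikes}
        = -n\_spikes \sum_i \mathrm{softplus}(a_i)$

Differentiating with respect to $a_i$ yields $-n\_spikes\,\sigma(a_i)$, which is exactly what freeEnergyContributionGradient() below accumulates.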


void PLearn::RBMRateLayer::freeEnergyContributionGradient ( const Vec & unit_activations,
Vec & unit_activations_gradient,
real  output_gradient = 1,
bool  accumulate = false 
) const [virtual]

Computes the gradient of the result of freeEnergyContribution, $-\log\bigl(\sum_{h} \exp(h^\top \mathrm{unit\_activations})\bigr)$, with respect to unit_activations.

Optionally, a gradient with respect to the result of freeEnergyContribution can be given via output_gradient.

Reimplemented from PLearn::RBMLayer.

Definition at line 302 of file RBMRateLayer.cc.

References a, PLearn::TVec< T >::clear(), PLearn::TVec< T >::data(), PLearn::fastsigmoid(), i, n_spikes, PLASSERT, PLearn::TVec< T >::resize(), PLearn::sigmoid(), PLearn::TVec< T >::size(), PLearn::RBMLayer::size, and PLearn::OnlineLearningModule::use_fast_approximations.

{
    PLASSERT( unit_activations.size() == size );
    unit_activations_gradient.resize( size );
    if( !accumulate ) unit_activations_gradient.clear();
    real* a = unit_activations.data();
    real* ga = unit_activations_gradient.data();
    for (int i=0; i<size; i++)
    {
        if (use_fast_approximations)
            ga[i] -= output_gradient * n_spikes *
                fastsigmoid( a[i] );
        else
            ga[i] -= output_gradient * n_spikes *
                sigmoid( a[i] );
    }
}


void PLearn::RBMRateLayer::generateSample ( ) [virtual]

generate a sample, and update the sample field

Implements PLearn::RBMLayer.

Definition at line 59 of file RBMRateLayer.cc.

References PLearn::RBMLayer::expectation, PLearn::RBMLayer::expectation_is_up_to_date, i, n_spikes, PLASSERT_MSG, PLCHECK_MSG, PLearn::RBMLayer::random_gen, PLearn::RBMLayer::sample, and PLearn::RBMLayer::size.

{
    PLASSERT_MSG(random_gen,
                 "random_gen should be initialized before generating samples");

    PLCHECK_MSG(expectation_is_up_to_date, "Expectation should be computed "
            "before calling generateSample()");

    real exp_i = 0;
    for( int i=0; i<size; i++)
    {
        exp_i = expectation[i];
        sample[i] = round(random_gen->gaussian_mu_sigma(
                              exp_i,exp_i*(1-exp_i/n_spikes)) );
    }
}
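
The Gaussian draw approximates a spike count: if each of the n_spikes potential spikes fires independently with probability $p_i = \mathrm{expectation}_i / n\_spikes$, the count is binomial with

    $\mu_i = n\_spikes\,p_i = \mathrm{expectation}_i,
    \qquad
    \sigma_i^2 = n\_spikes\,p_i(1 - p_i)
               = \mathrm{expectation}_i \Bigl(1 - \frac{\mathrm{expectation}_i}{n\_spikes}\Bigr)$

which matches the two quantities passed to gaussian_mu_sigma(). Note that the second argument is this variance-shaped expression; whether gaussian_mu_sigma() expects a standard deviation or a variance cannot be determined from this page alone.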

void PLearn::RBMRateLayer::generateSamples ( ) [virtual]

batch version

Implements PLearn::RBMLayer.

Definition at line 76 of file RBMRateLayer.cc.

References PLearn::RBMLayer::batch_size, PLearn::RBMLayer::expectations, PLearn::RBMLayer::expectations_are_up_to_date, i, PLearn::TMat< T >::length(), n_spikes, PLASSERT, PLASSERT_MSG, PLCHECK_MSG, PLearn::RBMLayer::random_gen, PLearn::RBMLayer::samples, PLearn::RBMLayer::size, and PLearn::TMat< T >::width().

{
    PLASSERT_MSG(random_gen,
                 "random_gen should be initialized before generating samples");

    PLCHECK_MSG(expectations_are_up_to_date, "Expectations should be computed "
                        "before calling generateSamples()");

    PLASSERT( samples.width() == size && samples.length() == batch_size );

    real exp_i = 0;
    for (int k = 0; k < batch_size; k++)
    {
        for( int i=0; i<size; i++)
        {
            exp_i = expectations(k,i);
            samples(k,i) = round(random_gen->gaussian_mu_sigma(
                                     exp_i,exp_i*(1-exp_i/n_spikes)) );
        }
    }
}


void PLearn::RBMRateLayer::getConfiguration ( int  conf_index,
Vec & output 
) [virtual]

Computes the conf_index configuration of the layer.

Reimplemented from PLearn::RBMLayer.

Definition at line 328 of file RBMRateLayer.cc.

References PLERROR.

{
    PLERROR("In RBMRateLayer::getConfiguration(): not implemented");
}

int PLearn::RBMRateLayer::getConfigurationCount ( ) [virtual]

Returns the number of different configurations the layer can be in.

Reimplemented from PLearn::RBMLayer.

Definition at line 323 of file RBMRateLayer.cc.

References PLearn::RBMLayer::INFINITE_CONFIGURATIONS.

OptionList & PLearn::RBMRateLayer::getOptionList ( ) const [virtual]

Reimplemented from PLearn::Object.

Definition at line 51 of file RBMRateLayer.cc.

OptionMap & PLearn::RBMRateLayer::getOptionMap ( ) const [virtual]

Reimplemented from PLearn::Object.

Definition at line 51 of file RBMRateLayer.cc.

RemoteMethodMap & PLearn::RBMRateLayer::getRemoteMethodMap ( ) const [virtual]

Reimplemented from PLearn::Object.

Definition at line 51 of file RBMRateLayer.cc.

void PLearn::RBMRateLayer::makeDeepCopyFromShallowCopy ( CopiesMap & copies) [virtual]

Transforms a shallow copy into a deep copy.

Reimplemented from PLearn::RBMLayer.

Definition at line 273 of file RBMRateLayer.cc.

References PLearn::RBMLayer::makeDeepCopyFromShallowCopy().

{
    inherited::makeDeepCopyFromShallowCopy(copies);
    //deepCopyField(tmp_softmax, copies);
}



Member Data Documentation

StaticInitializer PLearn::RBMRateLayer::_static_initializer_ [static]

Reimplemented from PLearn::RBMLayer.

Definition at line 127 of file RBMRateLayer.h.

Vec PLearn::RBMRateLayer::tmp_softmax [mutable, protected]

Definition at line 137 of file RBMRateLayer.h.


The documentation for this class was generated from the following files:

 RBMRateLayer.h
 RBMRateLayer.cc