PLearn 0.1
PLearn::RBMConv2DConnection Class Reference

Filter between two linear layers of a 2D convolutional RBM. More...

#include <RBMConv2DConnection.h>


Public Member Functions

 RBMConv2DConnection (real the_learning_rate=0)
 Default constructor.
virtual void accumulatePosStats (const Vec &down_values, const Vec &up_values)
 Accumulates positive phase statistics to *_pos_stats.
virtual void accumulatePosStats (const Mat &down_values, const Mat &up_values)
virtual void accumulateNegStats (const Vec &down_values, const Vec &up_values)
 Accumulates negative phase statistics to *_neg_stats.
virtual void accumulateNegStats (const Mat &down_values, const Mat &up_values)
virtual void update ()
 Updates parameters according to contrastive divergence gradient.
virtual void update (const Vec &pos_down_values, const Vec &pos_up_values, const Vec &neg_down_values, const Vec &neg_up_values)
 Updates parameters according to contrastive divergence gradient, not using the statistics but the explicit values passed.
virtual void update (const Mat &pos_down_values, const Mat &pos_up_values, const Mat &neg_down_values, const Mat &neg_up_values)
 Updates parameters according to contrastive divergence gradient, not using the statistics but explicit matrix values.
virtual void clearStats ()
 Clear all information accumulated during stats.
virtual void computeProduct (int start, int length, const Vec &activations, bool accumulate=false) const
 Computes the vectors of activation of "length" units, starting from "start", and stores them into "activations".
virtual void computeProducts (int start, int length, Mat &activations, bool accumulate=false) const
 Same as 'computeProduct' but for mini-batches.
virtual void bpropUpdate (const Vec &input, const Vec &output, Vec &input_gradient, const Vec &output_gradient, bool accumulate=false)
 Adapt based on the output gradient: this method should only be called just after a corresponding fprop; it should be called with the same arguments as fprop for the first two arguments (and output should not have been modified since then).
virtual void bpropUpdate (const Mat &inputs, const Mat &outputs, Mat &input_gradients, const Mat &output_gradients, bool accumulate=false)
 SOON TO BE DEPRECATED, USE bpropAccUpdate(const TVec<Mat*>& ports_value, const TVec<Mat*>& ports_gradient)
virtual void bpropAccUpdate (const TVec< Mat * > &ports_value, const TVec< Mat * > &ports_gradient)
 Perform a back propagation step (also updating parameters according to the provided gradient).
virtual void forget ()
 Reset the parameters to the state they would be in BEFORE starting training.
virtual int nParameters () const
 Return the number of parameters.
virtual Vec makeParametersPointHere (const Vec &global_parameters)
 Make the parameters data be sub-vectors of the given global_parameters.
virtual string classname () const
virtual OptionList & getOptionList () const
virtual OptionMap & getOptionMap () const
virtual RemoteMethodMap & getRemoteMethodMap () const
virtual RBMConv2DConnection * deepCopy (CopiesMap &copies) const
virtual void build ()
 Post-constructor.
virtual void makeDeepCopyFromShallowCopy (CopiesMap &copies)
 Transforms a shallow copy into a deep copy.

Static Public Member Functions

static string _classname_ ()
static OptionList & _getOptionList_ ()
static RemoteMethodMap & _getRemoteMethodMap_ ()
static Object * _new_instance_for_typemap_ ()
static bool _isa_ (const Object *o)
static void _static_initialize_ ()
static const PPath & declaringFile ()

Public Attributes

int down_image_length
 Length of the down image.
int down_image_width
 Width of the down image.
int up_image_length
 Length of the up image.
int up_image_width
 Width of the up image.
int kernel_step1
 "Vertical" step
int kernel_step2
 "Horizontal" step
Mat kernel
 Matrix containing the convolution kernel (filter)
int kernel_length
 Length of the kernel.
int kernel_width
 Width of the kernel.
Mat kernel_pos_stats
 Accumulates positive contribution to the weights' gradient.
Mat kernel_neg_stats
 Accumulates negative contribution to the weights' gradient.
Mat kernel_inc
 Used if momentum != 0.

Static Public Attributes

static StaticInitializer _static_initializer_

Static Protected Member Functions

static void declareOptions (OptionList &ol)
 Declares the class options.

Private Types

typedef RBMConnection inherited

Private Member Functions

void build_ ()
 This does the actual building.

Private Attributes

Mat down_image
Mat up_image
Mat down_image_gradient
Mat up_image_gradient
Mat kernel_gradient

Detailed Description

Filter between two linear layers of a 2D convolutional RBM.

Todo:
yes

Definition at line 54 of file RBMConv2DConnection.h.
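
A minimal configuration sketch (illustrative geometry; assumes the standard PLearn PP smart pointer): the connection is set up through its build options, and build() then derives the kernel size.

#include <RBMConv2DConnection.h>

using namespace PLearn;

PP<RBMConv2DConnection> conn = new RBMConv2DConnection( 0.01 /* learning rate */ );
conn->down_image_length = 16;   // 16x16 down image
conn->down_image_width  = 16;
conn->up_image_length   = 13;   // 13x13 up image
conn->up_image_width    = 13;
conn->kernel_step1      = 1;    // "vertical" step
conn->kernel_step2      = 1;    // "horizontal" step
conn->build();                  // derives a 4x4 kernel, see build_()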


Member Typedef Documentation

typedef RBMConnection PLearn::RBMConv2DConnection::inherited [private]

Reimplemented from PLearn::RBMConnection.

Definition at line 56 of file RBMConv2DConnection.h.


Constructor & Destructor Documentation

PLearn::RBMConv2DConnection::RBMConv2DConnection ( real  the_learning_rate = 0)

Default constructor.

Definition at line 54 of file RBMConv2DConnection.cc.


Member Function Documentation

string PLearn::RBMConv2DConnection::_classname_ ( ) [static]

Reimplemented from PLearn::RBMConnection.

Definition at line 52 of file RBMConv2DConnection.cc.

OptionList & PLearn::RBMConv2DConnection::_getOptionList_ ( ) [static]

Reimplemented from PLearn::RBMConnection.

Definition at line 52 of file RBMConv2DConnection.cc.

RemoteMethodMap & PLearn::RBMConv2DConnection::_getRemoteMethodMap_ ( ) [static]

Reimplemented from PLearn::RBMConnection.

Definition at line 52 of file RBMConv2DConnection.cc.

bool PLearn::RBMConv2DConnection::_isa_ ( const Object * o) [static]

Reimplemented from PLearn::RBMConnection.

Definition at line 52 of file RBMConv2DConnection.cc.

Object * PLearn::RBMConv2DConnection::_new_instance_for_typemap_ ( ) [static]

Reimplemented from PLearn::Object.

Definition at line 52 of file RBMConv2DConnection.cc.

void PLearn::RBMConv2DConnection::_static_initialize_ ( ) [static]

Reimplemented from PLearn::RBMConnection.

Definition at line 52 of file RBMConv2DConnection.cc.

void PLearn::RBMConv2DConnection::accumulateNegStats ( const Vec & down_values, const Vec & up_values ) [virtual]

Accumulates negative phase statistics to *_neg_stats.

Implements PLearn::RBMConnection.

Definition at line 200 of file RBMConv2DConnection.cc.

References PLearn::convolve2Dbackprop(), down_image, down_image_length, down_image_width, kernel_neg_stats, kernel_step1, kernel_step2, PLearn::RBMConnection::neg_count, PLearn::TVec< T >::toMat(), up_image, up_image_length, and up_image_width.

{
    down_image = down_values.toMat( down_image_length, down_image_width );
    up_image = up_values.toMat( up_image_length, up_image_width );
    /*  for i=0 to up_image_length:
     *   for j=0 to up_image_width:
     *     for l=0 to kernel_length:
     *       for m=0 to kernel_width:
     *         kernel_neg_stats(l,m) +=
     *           down_image(step1*i+l,step2*j+m) * up_image(i,j)
     */
    convolve2Dbackprop( down_image, up_image, kernel_neg_stats,
                        kernel_step1, kernel_step2, true );

    neg_count++;
}
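
A hedged sketch of how these statistics methods are chained during contrastive divergence, continuing the configuration sketch above (pos_down, pos_up, neg_down, neg_up are hypothetical Vecs of sizes down_size and up_size):

conn->accumulatePosStats( pos_down, pos_up );   // positive (data) phase
conn->accumulateNegStats( neg_down, neg_up );   // negative (model) phase
conn->update();    // CD step from the accumulated stats; calls clearStats()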


virtual void PLearn::RBMConv2DConnection::accumulateNegStats ( const Mat & down_values, const Mat & up_values ) [inline, virtual]

Implements PLearn::RBMConnection.

Definition at line 123 of file RBMConv2DConnection.h.

References PLASSERT_MSG.

    {
        PLASSERT_MSG( false, "Not implemented" );
    }
virtual void PLearn::RBMConv2DConnection::accumulatePosStats ( const Mat & down_values, const Mat & up_values ) [inline, virtual]

Implements PLearn::RBMConnection.

Definition at line 113 of file RBMConv2DConnection.h.

References PLASSERT_MSG.

    {
        PLASSERT_MSG( false, "Not implemented" );
    }
void PLearn::RBMConv2DConnection::accumulatePosStats ( const Vec & down_values, const Vec & up_values ) [virtual]

Accumulates positive phase statistics to *_pos_stats.

Implements PLearn::RBMConnection.

Definition at line 181 of file RBMConv2DConnection.cc.

References PLearn::convolve2Dbackprop(), down_image, down_image_length, down_image_width, kernel_pos_stats, kernel_step1, kernel_step2, PLearn::RBMConnection::pos_count, PLearn::TVec< T >::toMat(), up_image, up_image_length, and up_image_width.

{
    down_image = down_values.toMat( down_image_length, down_image_width );
    up_image = up_values.toMat( up_image_length, up_image_width );

    /*  for i=0 to up_image_length:
     *   for j=0 to up_image_width:
     *     for l=0 to kernel_length:
     *       for m=0 to kernel_width:
     *         kernel_pos_stats(l,m) +=
     *           down_image(step1*i+l,step2*j+m) * up_image(i,j)
     */
    convolve2Dbackprop( down_image, up_image, kernel_pos_stats,
                        kernel_step1, kernel_step2, true );

    pos_count++;
}


void PLearn::RBMConv2DConnection::bpropAccUpdate ( const TVec< Mat * > &  ports_value,
const TVec< Mat * > &  ports_gradient 
) [virtual]

Perform a back propagation step (also updating parameters according to the provided gradient).

The matrices in 'ports_value' must be the same as the ones given in a previous call to 'fprop' (and thus they should in particular contain the result of the fprop computation). However, they are not necessarily the same as the ones given in the LAST call to 'fprop': if there is a need to store an internal module state, this should be done using a specific port to store this state. Each Mat* pointer in the 'ports_gradient' vector can be one of:

  • a full matrix: this is the gradient provided to the module, and can be used to compute other ports' gradients.
  • an empty matrix: this is a gradient we want to compute and accumulate into. This matrix must have length 0 and a width equal to the width of the corresponding matrix in the 'ports_value' vector (we can thus accumulate gradients using PLearn's ability to keep stored values intact when resizing a matrix's length).
  • a NULL pointer: this is a gradient that is not available, but does not need to be returned (or even computed).

The default version tries to use the standard mini-batch bpropUpdate method, when possible; a sketch of the port usage follows.
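
For illustration, a sketch of requesting the down-side gradient through this port mechanism ('down', 'up' and 'up_grad' are hypothetical matrices from a previous fprop; 'down_grad' is passed empty so that the module computes it):

TVec<Mat*> ports_value( 2 );
ports_value[0] = &down;                  // batch_size x down_size
ports_value[1] = &up;                    // batch_size x up_size

Mat down_grad( 0, conn->down_size );     // empty: gradient to be computed
TVec<Mat*> ports_gradient( 2 );
ports_gradient[0] = &down_grad;
ports_gradient[1] = &up_grad;            // full: gradient provided

conn->bpropAccUpdate( ports_value, ports_gradient );
// down_grad is now batch_size x down_size, and the kernel has been updated.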

Reimplemented from PLearn::OnlineLearningModule.

Definition at line 639 of file RBMConv2DConnection.cc.

References PLearn::backConvolve2Dbackprop(), PLearn::TMat< T >::clear(), PLearn::convolve2Dbackprop(), down_image, down_image_gradient, down_image_length, down_image_width, PLearn::RBMConnection::down_size, PLearn::TMat< T >::isEmpty(), kernel, kernel_gradient, kernel_step1, kernel_step2, PLearn::RBMConnection::learning_rate, PLearn::TMat< T >::length(), PLearn::TVec< T >::length(), PLearn::multiplyAcc(), PLearn::OnlineLearningModule::nPorts(), PLASSERT, PLCHECK_MSG, PLearn::TMat< T >::resize(), up_image, up_image_gradient, up_image_length, up_image_width, PLearn::RBMConnection::up_size, and PLearn::TMat< T >::width().

{
    PLASSERT( ports_value.length() == nPorts()
              && ports_gradient.length() == nPorts() );

    Mat* down = ports_value[0];
    Mat* up = ports_value[1];
    Mat* down_grad = ports_gradient[0];
    Mat* up_grad = ports_gradient[1];

    PLASSERT( down && !down->isEmpty() );
    PLASSERT( up && !up->isEmpty() );

    int batch_size = down->length();
    PLASSERT( up->length() == batch_size );

    // If we have up_grad
    if( up_grad && !up_grad->isEmpty() )
    {
        // down_grad should not be provided
        PLASSERT( !down_grad || down_grad->isEmpty() );
        PLASSERT( up_grad->length() == batch_size );
        PLASSERT( up_grad->width() == up_size );

        // If we want down_grad
        bool compute_down_grad = false;
        if( down_grad && down_grad->isEmpty() )
        {
            PLASSERT( down_grad->width() == down_size );
            down_grad->resize(batch_size, down_size);
            compute_down_grad = true;
        }

        kernel_gradient.clear();
        for (int k=0; k<batch_size; k++)
        {
            down_image = (*down)(k).toMat(down_image_length, down_image_width);
            up_image = (*up)(k).toMat(up_image_length, up_image_width);
            up_image_gradient = (*up_grad)(k)
                .toMat(up_image_length, up_image_width);

            if( compute_down_grad )
            {
                down_image_gradient = (*down_grad)(k)
                    .toMat(down_image_length, down_image_width);
                convolve2Dbackprop(down_image, kernel,
                                   up_image_gradient, down_image_gradient,
                                   kernel_gradient,
                                   kernel_step1, kernel_step2, true);
            }
            else
                convolve2Dbackprop(down_image, up_image_gradient,
                                   kernel_gradient,
                                   kernel_step1, kernel_step2, true);
        }
        // kernel -= learning_rate/n * kernel_gradient
        multiplyAcc(kernel, kernel_gradient, -learning_rate/batch_size);
    }
    else if( down_grad && !down_grad->isEmpty() )
    {
        PLASSERT( down_grad->length() == batch_size );
        PLASSERT( down_grad->width() == down_size );

        // If we want up_grad
        bool compute_up_grad = false;
        if( up_grad && up_grad->isEmpty() )
        {
            PLASSERT( up_grad->width() == up_size );
            up_grad->resize(batch_size, up_size);
            compute_up_grad = true;
        }

        kernel_gradient.clear();
        for (int k=0; k<batch_size; k++)
        {
            down_image = (*down)(k).toMat(down_image_length, down_image_width);
            up_image = (*up)(k).toMat(up_image_length, up_image_width);
            down_image_gradient = (*down_grad)(k)
                .toMat(down_image_length, down_image_width);

            if( compute_up_grad )
            {
                up_image_gradient = (*up_grad)(k)
                    .toMat(up_image_length, up_image_width);
                backConvolve2Dbackprop(kernel, up_image, up_image_gradient,
                                       down_image_gradient, kernel_gradient,
                                       kernel_step1, kernel_step2, true);
            }
            else
                backConvolve2Dbackprop(up_image, down_image_gradient,
                                       kernel_gradient,
                                       kernel_step1, kernel_step2, true);
        }
        // kernel -= learning_rate/n * kernel_gradient
        multiplyAcc(kernel, kernel_gradient, -learning_rate/batch_size);
    }
    else
        PLCHECK_MSG( false,
                     "Unknown port configuration" );
}


void PLearn::RBMConv2DConnection::bpropUpdate ( const Vec & input, const Vec & output, Vec & input_gradient, const Vec & output_gradient, bool accumulate = false ) [virtual]

Adapt based on the output gradient: this method should only be called just after a corresponding fprop; it should be called with the same arguments as fprop for the first two arguments (and output should not have been modified since then).

This version also computes the input gradient.

Since sub-classes are supposed to learn ONLINE, the object is 'ready-to-be-used' just after any bpropUpdate. N.B. A DEFAULT IMPLEMENTATION OF THE THREE-ARGUMENT VERSION IS PROVIDED IN THE SUPER-CLASS, WHICH JUST CALLS bpropUpdate(input, output, input_gradient, output_gradient) AND IGNORES THE INPUT GRADIENT, WHILE THE DEFAULT IMPLEMENTATION OF THIS FOUR-ARGUMENT VERSION JUST RAISES A PLERROR.

Reimplemented from PLearn::OnlineLearningModule.

Definition at line 556 of file RBMConv2DConnection.cc.

References PLearn::convolve2Dbackprop(), down_image, down_image_gradient, down_image_length, down_image_width, PLearn::RBMConnection::down_size, kernel, kernel_gradient, kernel_step1, kernel_step2, PLearn::RBMConnection::learning_rate, PLearn::multiplyAcc(), PLASSERT, PLASSERT_MSG, PLearn::TVec< T >::resize(), PLearn::TVec< T >::size(), PLearn::TVec< T >::toMat(), up_image, up_image_gradient, up_image_length, up_image_width, and PLearn::RBMConnection::up_size.

{
    PLASSERT( input.size() == down_size );
    PLASSERT( output.size() == up_size );
    PLASSERT( output_gradient.size() == up_size );

    if( accumulate )
    {
        PLASSERT_MSG( input_gradient.size() == down_size,
                      "Cannot resize input_gradient AND accumulate into it" );
    }
    else
        input_gradient.resize( down_size );

    down_image = input.toMat( down_image_length, down_image_width );
    up_image = output.toMat( up_image_length, up_image_width );
    down_image_gradient = input_gradient.toMat( down_image_length,
                                                down_image_width );
    up_image_gradient = output_gradient.toMat( up_image_length,
                                               up_image_width );

    // update input_gradient and kernel_gradient
    convolve2Dbackprop( down_image, kernel,
                        up_image_gradient, down_image_gradient,
                        kernel_gradient,
                        kernel_step1, kernel_step2, accumulate );

    // kernel -= learning_rate * kernel_gradient
    multiplyAcc( kernel, kernel_gradient, -learning_rate );
}
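
A sketch of a single online step (input, output and output_gradient are hypothetical Vecs matching the preceding fprop call):

Vec input_gradient;   // resized to down_size by the call, since accumulate=false
conn->bpropUpdate( input, output, input_gradient, output_gradient );
// kernel <- kernel - learning_rate * kernel_gradient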


void PLearn::RBMConv2DConnection::bpropUpdate ( const Mat & inputs, const Mat & outputs, Mat & input_gradients, const Mat & output_gradients, bool accumulate = false ) [virtual]

SOON TO BE DEPRECATED, USE bpropAccUpdate(const TVec<Mat*>& ports_value, const TVec<Mat*>& ports_gradient)

Reimplemented from PLearn::OnlineLearningModule.

Definition at line 590 of file RBMConv2DConnection.cc.

References PLearn::TMat< T >::clear(), PLearn::convolve2Dbackprop(), down_image, down_image_gradient, down_image_length, down_image_width, PLearn::RBMConnection::down_size, kernel, kernel_gradient, kernel_step1, kernel_step2, PLearn::RBMConnection::learning_rate, PLearn::TMat< T >::length(), PLearn::multiplyAcc(), PLASSERT, PLASSERT_MSG, PLearn::TMat< T >::resize(), up_image, up_image_gradient, up_image_length, up_image_width, PLearn::RBMConnection::up_size, and PLearn::TMat< T >::width().

{
    PLASSERT( inputs.width() == down_size );
    PLASSERT( outputs.width() == up_size );
    PLASSERT( output_gradients.width() == up_size );

    int batch_size = inputs.length();
    PLASSERT( outputs.length() == batch_size );
    PLASSERT( output_gradients.length() == batch_size );

    if( accumulate )
    {
        PLASSERT_MSG( input_gradients.width() == down_size &&
                      input_gradients.length() == batch_size,
                      "Cannot resize input_gradient AND accumulate into it" );
    }
    else
    {
        input_gradients.resize(batch_size, down_size);
        input_gradients.clear();
    }

    kernel_gradient.clear();
    for( int k=0; k<batch_size; k++ )
    {
        down_image = inputs(k).toMat( down_image_length, down_image_width );
        up_image = outputs(k).toMat( up_image_length, up_image_width );
        down_image_gradient = input_gradients(k)
            .toMat( down_image_length, down_image_width );
        up_image_gradient = output_gradients(k)
            .toMat( up_image_length, up_image_width );

        // update input_gradient and kernel_gradient
        convolve2Dbackprop( down_image, kernel,
                            up_image_gradient, down_image_gradient,
                            kernel_gradient,
                            kernel_step1, kernel_step2, true );
    }

    // kernel -= learning_rate/n * kernel_gradient
    multiplyAcc( kernel, kernel_gradient, -learning_rate/batch_size );
}


void PLearn::RBMConv2DConnection::build ( ) [virtual]

Post-constructor.

The normal implementation should call simply inherited::build(), then this class's build_(). This method should be callable again at later times, after modifying some option fields to change the "architecture" of the object.

Reimplemented from PLearn::RBMConnection.

Definition at line 159 of file RBMConv2DConnection.cc.

References PLearn::RBMConnection::build(), and build_().


void PLearn::RBMConv2DConnection::build_ ( ) [private]

This does the actual building.

Reimplemented from PLearn::RBMConnection.

Definition at line 115 of file RBMConv2DConnection.cc.

References clearStats(), down_image_length, down_image_width, PLearn::RBMConnection::down_size, PLearn::endl(), forget(), kernel, kernel_gradient, kernel_inc, kernel_length, kernel_neg_stats, kernel_pos_stats, kernel_step1, kernel_step2, kernel_width, PLearn::TMat< T >::length(), PLearn::RBMConnection::momentum, PLearn::OnlineLearningModule::output_size, PLASSERT, PLearn::TMat< T >::resize(), up_image_length, up_image_width, PLearn::RBMConnection::up_size, and PLearn::TMat< T >::width().

Referenced by build().

{
    MODULE_LOG << "build_() called" << endl;

    down_size = down_image_length * down_image_width;
    up_size = up_image_length * up_image_width;

    PLASSERT( down_image_length > 0 );
    PLASSERT( down_image_width > 0 );
    PLASSERT( down_image_length * down_image_width == down_size );
    PLASSERT( up_image_length > 0 );
    PLASSERT( up_image_width > 0 );
    PLASSERT( up_image_length * up_image_width == up_size );
    PLASSERT( kernel_step1 > 0 );
    PLASSERT( kernel_step2 > 0 );

    kernel_length = down_image_length - kernel_step1 * (up_image_length-1);
    PLASSERT( kernel_length > 0 );
    kernel_width = down_image_width - kernel_step2 * (up_image_width-1);
    PLASSERT( kernel_width > 0 );

    output_size = 0;
    bool needs_forget = false; // do we need to reinitialize the parameters?

    if( kernel.length() != kernel_length ||
        kernel.width() != kernel_width )
    {
        kernel.resize( kernel_length, kernel_width );
        needs_forget = true;
    }

    kernel_pos_stats.resize( kernel_length, kernel_width );
    kernel_neg_stats.resize( kernel_length, kernel_width );
    kernel_gradient.resize( kernel_length, kernel_width );

    if( momentum != 0. )
        kernel_inc.resize( kernel_length, kernel_width );

    if( needs_forget )
        forget();

    clearStats();
}
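
For concreteness, with down_image_length = down_image_width = 16, up_image_length = up_image_width = 13 and both kernel steps equal to 1 (illustrative values), the formulas above give kernel_length = 16 - 1*(13-1) = 4 and kernel_width = 16 - 1*(13-1) = 4. Equivalently, down = kernel + step*(up - 1) must hold along each axis; geometries violating this fail the PLASSERTs.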


string PLearn::RBMConv2DConnection::classname ( ) const [virtual]

Reimplemented from PLearn::Object.

Definition at line 52 of file RBMConv2DConnection.cc.

void PLearn::RBMConv2DConnection::clearStats ( ) [virtual]

Clear all information accumulated during stats.

Implements PLearn::RBMConnection.

Definition at line 428 of file RBMConv2DConnection.cc.

References PLearn::TMat< T >::clear(), kernel_neg_stats, kernel_pos_stats, PLearn::RBMConnection::neg_count, and PLearn::RBMConnection::pos_count.

Referenced by build_(), forget(), and update().

{
    kernel_pos_stats.clear();
    kernel_neg_stats.clear();

    pos_count = 0;
    neg_count = 0;
}


void PLearn::RBMConv2DConnection::computeProduct ( int start, int length, const Vec & activations, bool accumulate = false ) const [virtual]

Computes the vectors of activation of "length" units, starting from "start", and stores them into "activations".

"start" indexes an up unit if "going_up", else a down unit.

Implements PLearn::RBMConnection.

Definition at line 438 of file RBMConv2DConnection.cc.

References PLearn::backConvolve2D(), PLearn::convolve2D(), PLearn::TVec< T >::length(), m, PLASSERT, PLearn::TVec< T >::subVec(), and PLearn::TVec< T >::toMat().

{
    // Unoptimized way that computes all the activations and returns a subvec
    PLASSERT( activations.length() == length );
    if( going_up )
    {
        PLASSERT( start+length <= up_size );
        down_image = input_vec.toMat( down_image_length, down_image_width );

        // special cases:
        if( length == 1 )
        {
            real act = 0;
            real* k = kernel.data();
            real* di = down_image.data()
                        + kernel_step1*(start / down_image_width)
                        + kernel_step2*(start % down_image_width);
            for( int l=0; l<kernel_length; l++, di+=down_image_width )
                for( int m=0; m<kernel_width; m++ )
                    act += di[m] * k[m];
            if( accumulate )
                activations[0] += act;
            else
                activations[0] = act;
        }
        else if( start == 0 && length == up_size )
        {
            up_image = activations.toMat( up_image_length, up_image_width );
            convolve2D( down_image, kernel, up_image,
                        kernel_step1, kernel_step2, accumulate );
        }
        else
        {
            up_image = Mat( up_image_length, up_image_width );
            convolve2D( down_image, kernel, up_image,
                        kernel_step1, kernel_step2, false );
            if( accumulate )
                activations += up_image.toVec().subVec( start, length );
            else
                activations << up_image.toVec().subVec( start, length );
        }
    }
    else
    {
        PLASSERT( start+length <= down_size );
        up_image = input_vec.toMat( up_image_length, up_image_width );

        // special cases
        if( start == 0 && length == down_size )
        {
            down_image = activations.toMat( down_image_length,
                                            down_image_width );
            backConvolve2D( down_image, kernel, up_image,
                            kernel_step1, kernel_step2, accumulate );
        }
        else
        {
            down_image = Mat( down_image_length, down_image_width );
            backConvolve2D( down_image, kernel, up_image,
                            kernel_step1, kernel_step2, false );
            if( accumulate )
                activations += down_image.toVec().subVec( start, length );
            else
                activations << down_image.toVec().subVec( start, length );
        }
    }
}
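
A sketch of the usual full-length "going up" case, assuming the input was set beforehand through the base class (RBMConnection::setAsDownInput, which fills input_vec and sets going_up; hedged from the base-class interface):

conn->setAsDownInput( down_sample );     // down_sample: hypothetical Vec of down_size
Vec activations( conn->up_size );
conn->computeProduct( 0, conn->up_size, activations );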


void PLearn::RBMConv2DConnection::computeProducts ( int start, int length, Mat & activations, bool accumulate = false ) const [virtual]

Same as 'computeProduct' but for mini-batches.

Implements PLearn::RBMConnection.

Definition at line 506 of file RBMConv2DConnection.cc.

References PLearn::backConvolve2D(), PLearn::convolve2D(), down_image, down_image_length, down_image_width, PLearn::RBMConnection::down_size, PLearn::RBMConnection::going_up, PLearn::RBMConnection::inputs_mat, kernel, kernel_step1, kernel_step2, PLearn::TMat< T >::length(), PLASSERT, PLCHECK_MSG, PLearn::TMat< T >::resize(), up_image, up_image_length, up_image_width, PLearn::RBMConnection::up_size, and PLearn::TMat< T >::width().

{
    PLASSERT( activations.width() == length );
    int batch_size = inputs_mat.length();
    activations.resize( batch_size, length);
    if( going_up )
    {
        PLASSERT( start+length <= up_size );
        // usual case
        if( start == 0 && length == up_size )
            for( int k=0; k<batch_size; k++ )
            {
                up_image = activations(k)
                    .toMat(up_image_length, up_image_width);
                down_image = inputs_mat(k)
                    .toMat(down_image_length, down_image_width);

                convolve2D(down_image, kernel, up_image,
                           kernel_step1, kernel_step2, accumulate);
            }
        else
            PLCHECK_MSG(false,
                        "Unusual case of use (start!=0 or length!=up_size)\n"
                        "not implemented yet.");
    }
    else
    {
        PLASSERT( start+length <= down_size );
        // usual case
        if( start == 0 && length == down_size )
            for( int k=0; k<batch_size; k++ )
            {
                up_image = inputs_mat(k)
                    .toMat(up_image_length, up_image_width);
                down_image = activations(k)
                    .toMat(down_image_length, down_image_width);

                backConvolve2D(down_image, kernel, up_image,
                               kernel_step1, kernel_step2, accumulate);
            }
        else
            PLCHECK_MSG(false,
                        "Unusual case of use (start!=0 or length!=down_size)\n"
                        "not implemented yet.");
    }
}


void PLearn::RBMConv2DConnection::declareOptions ( OptionList & ol ) [static, protected]

Declares the class options.

Reimplemented from PLearn::RBMConnection.

Definition at line 67 of file RBMConv2DConnection.cc.

References PLearn::OptionBase::buildoption, PLearn::declareOption(), PLearn::RBMConnection::declareOptions(), down_image_length, down_image_width, PLearn::RBMConnection::down_size, kernel, kernel_step1, kernel_step2, PLearn::OptionBase::learntoption, PLearn::redeclareOption(), up_image_length, up_image_width, and PLearn::RBMConnection::up_size.

{
    declareOption(ol, "down_image_length",
                  &RBMConv2DConnection::down_image_length,
                  OptionBase::buildoption,
                  "Length of the down image");

    declareOption(ol, "down_image_width",
                  &RBMConv2DConnection::down_image_width,
                  OptionBase::buildoption,
                  "Width of the down image");

    declareOption(ol, "up_image_length",
                  &RBMConv2DConnection::up_image_length,
                  OptionBase::buildoption,
                  "Length of the up image");

    declareOption(ol, "up_image_width",
                  &RBMConv2DConnection::up_image_width,
                  OptionBase::buildoption,
                  "Width of the up image");

    declareOption(ol, "kernel_step1", &RBMConv2DConnection::kernel_step1,
                  OptionBase::buildoption,
                  "\"Vertical\" step of the convolution");

    declareOption(ol, "kernel_step2", &RBMConv2DConnection::kernel_step2,
                  OptionBase::buildoption,
                  "\"Horizontal\" step of the convolution");

    declareOption(ol, "kernel", &RBMConv2DConnection::kernel,
                  OptionBase::learntoption,
                  "Matrix containing the convolution kernel (filter)");

    // Now call the parent class' declareOptions
    inherited::declareOptions(ol);

    redeclareOption(ol, "down_size",
                    &RBMConv2DConnection::down_size,
                    OptionBase::learntoption,
                    "Equals to down_image_length × down_image_width");

    redeclareOption(ol, "up_size",
                    &RBMConv2DConnection::up_size,
                    OptionBase::learntoption,
                    "Equals to up_image_length × up_image_width");
}
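
Because these options are declared, they can also be set by name through the generic Object interface; a sketch (string values are deserialized by Object::setOption; illustrative values):

conn->setOption( "down_image_length", "16" );
conn->setOption( "kernel_step1", "2" );
conn->build();   // rebuild so derived quantities (down_size, kernel_length, ...) are recomputed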


static const PPath& PLearn::RBMConv2DConnection::declaringFile ( ) [inline, static]

Reimplemented from PLearn::RBMConnection.

Definition at line 210 of file RBMConv2DConnection.h.

RBMConv2DConnection * PLearn::RBMConv2DConnection::deepCopy ( CopiesMap & copies ) const [virtual]

Reimplemented from PLearn::RBMConnection.

Definition at line 52 of file RBMConv2DConnection.cc.

void PLearn::RBMConv2DConnection::forget ( ) [virtual]

Reset the parameters to the state they would be in BEFORE starting training.

Note that this method is necessarily called from build().

Implements PLearn::OnlineLearningModule.

Definition at line 744 of file RBMConv2DConnection.cc.

References PLearn::TMat< T >::clear(), clearStats(), d, PLearn::RBMConnection::initialization_method, kernel, kernel_length, kernel_width, PLearn::max(), PLWARNING, PLearn::OnlineLearningModule::random_gen, and PLearn::sqrt().

Referenced by build_().

{
    clearStats();
    if( initialization_method == "zero" )
        kernel.clear();
    else
    {
        if( !random_gen )
        {
            PLWARNING( "RBMConv2DConnection: cannot forget() without"
                       " random_gen" );
            return;
        }

        real d = 1. / max( kernel_length, kernel_width );
        if( initialization_method == "uniform_sqrt" )
            d = sqrt( d );

        random_gen->fill_random_uniform( kernel, -d, d );
    }
}
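
For example, with a 4x5 kernel (illustrative), d = 1/5 = 0.2: "uniform" initialization draws kernel entries uniformly in [-0.2, 0.2], while "uniform_sqrt" widens the range to sqrt(0.2), i.e. about [-0.447, 0.447].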


OptionList & PLearn::RBMConv2DConnection::getOptionList ( ) const [virtual]

Reimplemented from PLearn::Object.

Definition at line 52 of file RBMConv2DConnection.cc.

OptionMap & PLearn::RBMConv2DConnection::getOptionMap ( ) const [virtual]

Reimplemented from PLearn::Object.

Definition at line 52 of file RBMConv2DConnection.cc.

RemoteMethodMap & PLearn::RBMConv2DConnection::getRemoteMethodMap ( ) const [virtual]

Reimplemented from PLearn::Object.

Definition at line 52 of file RBMConv2DConnection.cc.

void PLearn::RBMConv2DConnection::makeDeepCopyFromShallowCopy ( CopiesMap & copies ) [virtual]

Transforms a shallow copy into a deep copy.

Vec PLearn::RBMConv2DConnection::makeParametersPointHere ( const Vec & global_parameters ) [virtual]

Make the parameters data be sub-vectors of the given global_parameters.

The argument should have size >= nParameters. The result is a Vec that starts just after this object's parameters end, i.e. result = global_parameters.subVec(nParameters(),global_parameters.size()-nParameters()); This makes it easy to chain calls of this method on multiple RBMParameters.

Implements PLearn::RBMConnection.

Definition at line 788 of file RBMConv2DConnection.cc.

References PLearn::TVec< T >::data(), kernel, m, PLearn::TMat< T >::makeSharedValue(), n, PLERROR, PLearn::TVec< T >::size(), PLearn::TMat< T >::size(), and PLearn::TVec< T >::subVec().

{
    int n=kernel.size();
    int m = global_parameters.size();
    if (m<n)
        PLERROR("RBMConv2DConnection::makeParametersPointHere: argument has length %d, should be longer than nParameters()=%d",m,n);
    real* p = global_parameters.data();
    kernel.makeSharedValue(p,n);
    return global_parameters.subVec(n,m-n);
}
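
A sketch of the chaining described above (conn1 and conn2 are hypothetical built connections):

Vec all_params( conn1->nParameters() + conn2->nParameters() );
Vec rest = conn1->makeParametersPointHere( all_params );
rest = conn2->makeParametersPointHere( rest );
// Both kernels now alias storage inside all_params; rest is empty.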


int PLearn::RBMConv2DConnection::nParameters ( ) const [virtual]

Return the number of parameters.

Implements PLearn::RBMConnection.

Definition at line 778 of file RBMConv2DConnection.cc.

References kernel, and PLearn::TMat< T >::size().

{
    return kernel.size();
}


void PLearn::RBMConv2DConnection::update ( const Vec & pos_down_values, const Vec & pos_up_values, const Vec & neg_down_values, const Vec & neg_up_values ) [virtual]

Updates parameters according to contrastive divergence gradient, not using the statistics but the explicit values passed.

Reimplemented from PLearn::RBMConnection.

Definition at line 268 of file RBMConv2DConnection.cc.

References PLearn::TMat< T >::data(), PLearn::TVec< T >::data(), down_image_length, down_image_width, PLearn::RBMConnection::down_size, i, j, kernel, kernel_inc, kernel_length, kernel_step1, kernel_step2, kernel_width, PLearn::RBMConnection::learning_rate, PLearn::TVec< T >::length(), m, PLearn::TMat< T >::mod(), PLearn::RBMConnection::momentum, PLearn::multiplyAcc(), PLASSERT, PLearn::TMat< T >::resize(), up_image_length, up_image_width, and PLearn::RBMConnection::up_size.

{
    PLASSERT( pos_up_values.length() == up_size );
    PLASSERT( neg_up_values.length() == up_size );
    PLASSERT( pos_down_values.length() == down_size );
    PLASSERT( neg_down_values.length() == down_size );

    /*  for i=0 to up_image_length:
     *   for j=0 to up_image_width:
     *     for l=0 to kernel_length:
     *       for m=0 to kernel_width:
     *         kernel(l,m) += learning_rate *
     *           ( pos_down_image(step1*i+l,step2*j+m) * pos_up_image(i,j)
     *             - neg_down_image(step1*i+l,step2*j+m) * neg_up_image(i,j) )
     */

    real* puv = pos_up_values.data();
    real* nuv = neg_up_values.data();
    real* pdv = pos_down_values.data();
    real* ndv = neg_down_values.data();
    int k_mod = kernel.mod();

    if( momentum == 0. )
    {
        for( int i=0; i<up_image_length; i++,
                                         puv+=up_image_width,
                                         nuv+=up_image_width,
                                         pdv+=kernel_step1*down_image_width,
                                         ndv+=kernel_step1*down_image_width )
        {
            // copies to iterate over columns
            real* pdv1 = pdv;
            real* ndv1 = ndv;
            for( int j=0; j<up_image_width; j++,
                                            pdv1+=kernel_step2,
                                            ndv1+=kernel_step2 )
            {
                real* k = kernel.data();
                real* pdv2 = pdv1; // copy to iterate over sub-rows
                real* ndv2 = ndv1;
                real puv_ij = puv[j];
                real nuv_ij = nuv[j];
                for( int l=0; l<kernel_length; l++, k+=k_mod,
                                               pdv2+=down_image_width,
                                               ndv2+=down_image_width )
                    for( int m=0; m<kernel_width; m++ )
                        k[m] += learning_rate *
                            (pdv2[m] * puv_ij - ndv2[m] * nuv_ij);
            }
        }
    }
    else
    {
        // ensure that weights_inc has the right size
        kernel_inc.resize( kernel_length, kernel_width );
        kernel_inc *= momentum;

        int kinc_mod = kernel_inc.mod();
        for( int i=0; i<up_image_length; i++,
                                           puv+=up_image_width,
                                           nuv+=up_image_width,
                                           pdv+=kernel_step1*down_image_width,
                                           ndv+=kernel_step1*down_image_width )
        {
            // copies to iterate over columns
            real* pdv1 = pdv;
            real* ndv1 = ndv;
            for( int j=0; j<up_image_width; j++,
                                              pdv1+=kernel_step2,
                                              ndv1+=kernel_step2 )
            {
                real* kinc = kernel_inc.data();
                real* pdv2 = pdv1; // copy to iterate over sub-rows
                real* ndv2 = ndv1;
                real puv_ij = puv[j];
                real nuv_ij = nuv[j];
                for( int l=0; l<kernel_length; l++, kinc+=kinc_mod,
                                               pdv2+=down_image_width,
                                               ndv2+=down_image_width )
                    for( int m=0; m<kernel_width; m++ )
                        kinc[m] += pdv2[m] * puv_ij - ndv2[m] * nuv_ij;
            }
        }
        multiplyAcc( kernel, kernel_inc, learning_rate );
    }
}


void PLearn::RBMConv2DConnection::update ( const Mat & pos_down_values, const Mat & pos_up_values, const Mat & neg_down_values, const Mat & neg_up_values ) [virtual]

Updates parameters according to contrastive divergence gradient, not using the statistics but explicit matrix values.

Reimplemented from PLearn::RBMConnection.

Definition at line 358 of file RBMConv2DConnection.cc.

References b, PLearn::TMat< T >::data(), down_image_width, PLearn::RBMConnection::down_size, i, j, kernel, kernel_length, kernel_step1, kernel_step2, kernel_width, PLearn::RBMConnection::learning_rate, PLearn::TMat< T >::length(), m, PLearn::TMat< T >::mod(), PLearn::RBMConnection::momentum, PLASSERT, PLCHECK_MSG, up_image_length, up_image_width, PLearn::RBMConnection::up_size, and PLearn::TMat< T >::width().

{
    PLASSERT( pos_up_values.width() == up_size );
    PLASSERT( neg_up_values.width() == up_size );
    PLASSERT( pos_down_values.width() == down_size );
    PLASSERT( neg_down_values.width() == down_size );

    int batch_size = pos_down_values.length();
    PLASSERT( pos_up_values.length() == batch_size );
    PLASSERT( neg_down_values.length() == batch_size );
    PLASSERT( neg_up_values.length() == batch_size );

    real norm_lr = learning_rate / batch_size;

    /*  for i=0 to up_image_length:
     *   for j=0 to up_image_width:
     *     for l=0 to kernel_length:
     *       for m=0 to kernel_width:
     *         kernel(l,m) += learning_rate *
     *           ( pos_down_image(step1*i+l,step2*j+m) * pos_up_image(i,j)
     *             - neg_down_image(step1*i+l,step2*j+m) * neg_up_image(i,j) )
     */

    if( momentum == 0. )
    {
        for( int b=0; b<batch_size; b++ )
        {
            real* puv = pos_up_values(b).data();
            real* nuv = neg_up_values(b).data();
            real* pdv = pos_down_values(b).data();
            real* ndv = neg_down_values(b).data();
            int k_mod = kernel.mod();

            for( int i=0; i<up_image_length;
                 i++,
                 puv+=up_image_width,
                 nuv+=up_image_width,
                 pdv+=kernel_step1*down_image_width,
                 ndv+=kernel_step1*down_image_width )
            {
                // copies to iterate over columns
                real* pdv1 = pdv;
                real* ndv1 = ndv;
                for( int j=0; j<up_image_width; j++,
                                                pdv1+=kernel_step2,
                                                ndv1+=kernel_step2 )
                {
                    real* k = kernel.data();
                    real* pdv2 = pdv1; // copy to iterate over sub-rows
                    real* ndv2 = ndv1;
                    real puv_ij = puv[j];
                    real nuv_ij = nuv[j];
                    for( int l=0; l<kernel_length; l++, k+=k_mod,
                                                   pdv2+=down_image_width,
                                                   ndv2+=down_image_width )
                        for( int m=0; m<kernel_width; m++ )
                            k[m] += norm_lr *
                                (pdv2[m] * puv_ij - ndv2[m] * nuv_ij);
                }
            }
        }
    }
    else
        PLCHECK_MSG(false,
                    "mini-batch and momentum don't work together yet");
}


void PLearn::RBMConv2DConnection::update ( ) [virtual]

Updates parameters according to contrastive divergence gradient.

Implements PLearn::RBMConnection.

Definition at line 218 of file RBMConv2DConnection.cc.

References clearStats(), PLearn::TMat< T >::data(), i, j, kernel, kernel_inc, kernel_length, kernel_neg_stats, kernel_pos_stats, kernel_width, PLearn::RBMConnection::learning_rate, PLearn::TMat< T >::mod(), PLearn::RBMConnection::momentum, PLearn::RBMConnection::neg_count, PLearn::RBMConnection::pos_count, and PLearn::TMat< T >::resize().

{
    // updates parameters
    // kernel += learning_rate * (kernel_pos_stats/pos_count
    //                              - kernel_neg_stats/neg_count)
    real pos_factor = learning_rate / pos_count;
    real neg_factor = -learning_rate / neg_count;

    real* k_i = kernel.data();
    real* kps_i = kernel_pos_stats.data();
    real* kns_i = kernel_neg_stats.data();
    int k_mod = kernel.mod();
    int kps_mod = kernel_pos_stats.mod();
    int kns_mod = kernel_neg_stats.mod();

    if( momentum == 0. )
    {
        // no need to use weights_inc
        for( int i=0 ; i<kernel_length ; i++, k_i+=k_mod,
                                         kps_i+=kps_mod, kns_i+=kns_mod )
            for( int j=0 ; j<kernel_width ; j++ )
                k_i[j] += pos_factor * kps_i[j] + neg_factor * kns_i[j];
    }
    else
    {
        // ensure that weights_inc has the right size
        kernel_inc.resize( kernel_length, kernel_width );

        // The update rule becomes:
        // kernel_inc = momentum * kernel_inc
        //               - learning_rate * (kernel_pos_stats/pos_count
        //                                  - kernel_neg_stats/neg_count);
        // kernel += kernel_inc;
        real* kinc_i = kernel_inc.data();
        int kinc_mod = kernel_inc.mod();
        for( int i=0 ; i<kernel_length ; i++, k_i += k_mod, kps_i += kps_mod,
                                         kns_i += kns_mod, kinc_i += kinc_mod )
            for( int j=0 ; j<kernel_width ; j++ )
            {
                kinc_i[j] = momentum * kinc_i[j]
                    + pos_factor * kps_i[j] + neg_factor * kns_i[j];
                k_i[j] += kinc_i[j];
            }
    }

    clearStats();
}



Member Data Documentation

StaticInitializer PLearn::RBMConv2DConnection::_static_initializer_ [static]

Reimplemented from PLearn::RBMConnection.

Definition at line 210 of file RBMConv2DConnection.h.

Mat PLearn::RBMConv2DConnection::down_image_gradient [private]

Definition at line 236 of file RBMConv2DConnection.h.

Referenced by bpropAccUpdate(), bpropUpdate(), and makeDeepCopyFromShallowCopy().

Mat PLearn::RBMConv2DConnection::kernel

Matrix containing the convolution kernel (filter)

Definition at line 82 of file RBMConv2DConnection.h.

Referenced by bpropAccUpdate(), bpropUpdate(), build_(), computeProducts(), declareOptions(), forget(), makeDeepCopyFromShallowCopy(), makeParametersPointHere(), nParameters(), and update().

Mat PLearn::RBMConv2DConnection::kernel_inc

Used if momentum != 0.

Definition at line 99 of file RBMConv2DConnection.h.

Referenced by build_(), makeDeepCopyFromShallowCopy(), and update().

int PLearn::RBMConv2DConnection::kernel_length

Length of the kernel.

Definition at line 87 of file RBMConv2DConnection.h.

Referenced by build_(), forget(), and update().

Mat PLearn::RBMConv2DConnection::kernel_neg_stats

Accumulates negative contribution to the weights' gradient.

Definition at line 96 of file RBMConv2DConnection.h.

Referenced by accumulateNegStats(), build_(), clearStats(), makeDeepCopyFromShallowCopy(), and update().

Mat PLearn::RBMConv2DConnection::kernel_pos_stats

Accumulates positive contribution to the weights' gradient.

Definition at line 93 of file RBMConv2DConnection.h.

Referenced by accumulatePosStats(), build_(), clearStats(), makeDeepCopyFromShallowCopy(), and update().

int PLearn::RBMConv2DConnection::kernel_width

Width of the kernel.

Definition at line 90 of file RBMConv2DConnection.h.

Referenced by build_(), forget(), and update().

Mat PLearn::RBMConv2DConnection::up_image_gradient [private]

Definition at line 237 of file RBMConv2DConnection.h.

Referenced by bpropAccUpdate(), bpropUpdate(), and makeDeepCopyFromShallowCopy().


The documentation for this class was generated from the following files:

RBMConv2DConnection.h
RBMConv2DConnection.cc