PLearn 0.1
PLearn::RBMMixedConnection Class Reference

Contains a matrix of other RBMConnections, acting as sub-matrices (blocks) of the linear transformation this class computes. More...

#include <RBMMixedConnection.h>

Inheritance diagram for PLearn::RBMMixedConnection: [diagram omitted]
Collaboration diagram for PLearn::RBMMixedConnection: [diagram omitted]

Public Member Functions

 RBMMixedConnection ()
 Default constructor.
virtual void setLearningRate (real the_learning_rate)
 Sets the learning rate, also in the sub_connections.
virtual void setMomentum (real the_momentum)
 Sets the momentum, also in the sub_connections.
virtual void setAsUpInput (const Vec &input) const
 Sets 'input_vec' to 'input', and 'going_up' to false.
virtual void setAsDownInput (const Vec &input) const
 Sets 'input_vec' to 'input', and 'going_up' to true.
virtual void setAsUpInputs (const Mat &inputs) const
 Set 'inputs_mat' to 'inputs', and 'going_up' to false.
virtual void setAsDownInputs (const Mat &inputs) const
 Set 'inputs_mat' to 'inputs', and 'going_up' to true.
virtual void accumulatePosStats (const Vec &down_values, const Vec &up_values)
 Accumulates positive phase statistics to *_pos_stats.
virtual void accumulatePosStats (const Mat &down_values, const Mat &up_values)
virtual void accumulateNegStats (const Vec &down_values, const Vec &up_values)
 Accumulates negative phase statistics to *_neg_stats.
virtual void accumulateNegStats (const Mat &down_values, const Mat &up_values)
virtual void update ()
 Updates parameters according to contrastive divergence gradient.
virtual void update (const Vec &pos_down_values, const Vec &pos_up_values, const Vec &neg_down_values, const Vec &neg_up_values)
 Updates parameters according to contrastive divergence gradient, not using the statistics but the explicit values passed.
virtual void update (const Mat &pos_down_values, const Mat &pos_up_values, const Mat &neg_down_values, const Mat &neg_up_values)
 Batch version, using the explicit values passed.
virtual void clearStats ()
 Clear all information accumulated during stats.
virtual void computeProduct (int start, int length, const Vec &activations, bool accumulate=false) const
 Computes the vectors of activation of "length" units, starting from "start", and stores them into "activations".
virtual void computeProducts (int start, int length, Mat &activations, bool accumulate=false) const
 Same as 'computeProduct' but for mini-batches.
virtual void bpropUpdate (const Vec &input, const Vec &output, Vec &input_gradient, const Vec &output_gradient, bool accumulate=false)
 Adapt based on the output gradient: this method should only be called just after a corresponding fprop; it should be called with the same arguments as fprop for the first two arguments (and output should not have been modified since then).
virtual void bpropUpdate (const Mat &inputs, const Mat &outputs, Mat &input_gradients, const Mat &output_gradients, bool accumulate=false)
 Batch version.
virtual void bpropAccUpdate (const TVec< Mat * > &ports_value, const TVec< Mat * > &ports_gradient)
 Perform a back propagation step (also updating parameters according to the provided gradient).
virtual void forget ()
 Reset the parameters to the state they would be in before starting training.
virtual int nParameters () const
 Return the number of parameters.
virtual Vec makeParametersPointHere (const Vec &global_parameters)
 Make the parameters data be sub-vectors of the given global_parameters.
virtual string classname () const
virtual OptionList & getOptionList () const
virtual OptionMap & getOptionMap () const
virtual RemoteMethodMap & getRemoteMethodMap () const
virtual RBMMixedConnection * deepCopy (CopiesMap &copies) const
virtual void build ()
 Post-constructor.
virtual void makeDeepCopyFromShallowCopy (CopiesMap &copies)
 Transforms a shallow copy into a deep copy.

Static Public Member Functions

static string _classname_ ()
static OptionList & _getOptionList_ ()
static RemoteMethodMap & _getRemoteMethodMap_ ()
static Object * _new_instance_for_typemap_ ()
static bool _isa_ (const Object *o)
static void _static_initialize_ ()
static const PPath & declaringFile ()

Public Attributes

TMat< PP< RBMConnection > > sub_connections
 Matrix containing the sub-transformations (blocks)

Static Public Attributes

static StaticInitializer _static_initializer_

Static Protected Member Functions

static void declareOptions (OptionList &ol)
 Declares the class options.

Protected Attributes

TVec< int > up_init_positions
 Initial vertical index of the blocks.
TVec< int > up_block_sizes
 Vertical sizes of the blocks.
TVec< int > down_init_positions
 Initial horizontal index of the blocks.
TVec< int > down_block_sizes
 Horizontal sizes of the blocks.
TVec< int > row_of
TVec< int > col_of
int n_up_blocks
 sub_connections.length()
int n_down_blocks
 sub_connections.width()

Private Types

typedef RBMConnection inherited

Private Member Functions

void build_ ()
 This does the actual building.

Detailed Description

Contains a matrix of other RBMConnections, acting as sub-matrices (blocks) of the linear transformation this class computes.

If a PP<RBMConnection> is null, it will be considered as a 0-filled matrix.

Todo: yes

Definition at line 56 of file RBMMixedConnection.h.
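
As a usage illustration, here is a minimal sketch (not taken from the PLearn distribution; the include paths and the choice of RBMMatrixConnection as the concrete sub-connection type are assumptions) of how a block connection might be assembled:

#include <plearn_learners/online/RBMMixedConnection.h>
#include <plearn_learners/online/RBMMatrixConnection.h>

using namespace PLearn;

PP<RBMMixedConnection> makeBlockConnection()
{
    // 2 x 2 grid of sub-connections: rows index 'up' blocks, columns 'down' blocks.
    TMat< PP<RBMConnection> > blocks( 2, 2 );

    PP<RBMMatrixConnection> block00 = new RBMMatrixConnection();
    block00->up_size = 10;
    block00->down_size = 20;
    block00->build();
    blocks(0,0) = block00;

    PP<RBMMatrixConnection> block11 = new RBMMatrixConnection();
    block11->up_size = 5;
    block11->down_size = 15;
    block11->build();
    blocks(1,1) = block11;

    // blocks(0,1) and blocks(1,0) stay null: treated as 0-filled sub-matrices.

    PP<RBMMixedConnection> conn = new RBMMixedConnection();
    conn->sub_connections = blocks;
    conn->build();   // computes up_size = 15 and down_size = 35 from the blocks
    return conn;
}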


Member Typedef Documentation

typedef RBMConnection PLearn::RBMMixedConnection::inherited [private]

Reimplemented from PLearn::RBMConnection.

Definition at line 58 of file RBMMixedConnection.h.


Constructor & Destructor Documentation

PLearn::RBMMixedConnection::RBMMixedConnection ( )

Default constructor.

Definition at line 55 of file RBMMixedConnection.cc.

{}

Member Function Documentation

string PLearn::RBMMixedConnection::_classname_ ( ) [static]

Reimplemented from PLearn::RBMConnection.

Definition at line 50 of file RBMMixedConnection.cc.

OptionList & PLearn::RBMMixedConnection::_getOptionList_ ( ) [static]

Reimplemented from PLearn::RBMConnection.

Definition at line 50 of file RBMMixedConnection.cc.

RemoteMethodMap & PLearn::RBMMixedConnection::_getRemoteMethodMap_ ( ) [static]

Reimplemented from PLearn::RBMConnection.

Definition at line 50 of file RBMMixedConnection.cc.

bool PLearn::RBMMixedConnection::_isa_ ( const Object * o) [static]

Reimplemented from PLearn::RBMConnection.

Definition at line 50 of file RBMMixedConnection.cc.

Object * PLearn::RBMMixedConnection::_new_instance_for_typemap_ ( ) [static]

Reimplemented from PLearn::Object.

Definition at line 50 of file RBMMixedConnection.cc.

void PLearn::RBMMixedConnection::_static_initialize_ ( ) [static]

Reimplemented from PLearn::RBMConnection.

Definition at line 50 of file RBMMixedConnection.cc.

void PLearn::RBMMixedConnection::accumulateNegStats ( const Vec & down_values, const Vec & up_values ) [virtual]

Accumulates negative phase statistics to *_neg_stats.

Implements PLearn::RBMConnection.

Definition at line 340 of file RBMMixedConnection.cc.

References i, j, and PLearn::TVec< T >::subVec().

{
    for( int i=0 ; i<n_up_blocks ; i++ )
    {
        Vec sub_up_values = up_values.subVec( up_init_positions[i],
                                              up_block_sizes[i] );

        for( int j=0 ; j<n_down_blocks ; j++ )
        {
            if( sub_connections(i,j) )
            {
                Vec sub_down_values =
                    down_values.subVec( down_init_positions[j],
                                        sub_connections(i,j)->down_size );

                sub_connections(i,j)->accumulateNegStats( sub_down_values,
                                                          sub_up_values );
            }
        }
    }

    neg_count++;
}


virtual void PLearn::RBMMixedConnection::accumulateNegStats ( const Mat & down_values, const Mat & up_values ) [inline, virtual]

Implements PLearn::RBMConnection.

Definition at line 104 of file RBMMixedConnection.h.

References PLASSERT_MSG.

    {
        PLASSERT_MSG( false, "Not implemented" );
    }
virtual void PLearn::RBMMixedConnection::accumulatePosStats ( const Mat & down_values, const Mat & up_values ) [inline, virtual]

Implements PLearn::RBMConnection.

Definition at line 94 of file RBMMixedConnection.h.

References PLASSERT_MSG.

    {
        PLASSERT_MSG( false, "Not implemented" );
    }
void PLearn::RBMMixedConnection::accumulatePosStats ( const Vec & down_values, const Vec & up_values ) [virtual]

Accumulates positive phase statistics to *_pos_stats.

Implements PLearn::RBMConnection.

Definition at line 315 of file RBMMixedConnection.cc.

References i, j, and PLearn::TVec< T >::subVec().

{
    for( int i=0 ; i<n_up_blocks ; i++ )
    {
        Vec sub_up_values = up_values.subVec( up_init_positions[i],
                                              up_block_sizes[i] );

        for( int j=0 ; j<n_down_blocks ; j++ )
        {
            if( sub_connections(i,j) )
            {
                Vec sub_down_values =
                    down_values.subVec( down_init_positions[j],
                                        sub_connections(i,j)->down_size );

                sub_connections(i,j)->accumulatePosStats( sub_down_values,
                                                          sub_up_values );
            }
        }
    }

    pos_count++;
}


void PLearn::RBMMixedConnection::bpropAccUpdate ( const TVec< Mat * > &  ports_value,
const TVec< Mat * > &  ports_gradient 
) [virtual]

Perform a back propagation step (also updating parameters according to the provided gradient).

The matrices in 'ports_value' must be the same as the ones given in a previous call to 'fprop' (and thus they should in particular contain the result of that fprop computation). However, they are not necessarily the same as the ones given in the LAST call to 'fprop': if there is a need to store an internal module state, this should be done using a specific port to store this state. Each Mat* pointer in the 'ports_gradient' vector can be one of:

  • a full matrix: this is the gradient that is provided to the module, and can be used to compute other ports' gradients.
  • an empty matrix: this is a gradient we want to compute and accumulate into. This matrix must have length 0 and a width equal to the width of the corresponding matrix in the 'ports_value' vector (we can thus accumulate gradients using PLearn's ability to keep stored values intact when resizing a matrix's length).
  • a NULL pointer: this is a gradient that is not available, but does not need to be returned (or even computed).

The default version tries to use the standard mini-batch bpropUpdate method, when possible. A caller-side sketch follows below.
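
For instance, a hedged caller-side sketch ('conn' is assumed to be a built RBMMixedConnection, 'batch_size' comes from surrounding code, and an fprop with the same value matrices is assumed to have been done first) that provides the 'up' gradient and asks for the 'down' gradient back:

// Port 0 is 'down', port 1 is 'up' (matching the order used in the code below).
Mat down_vals( batch_size, conn->down_size );  // filled during the earlier fprop
Mat up_vals( batch_size, conn->up_size );      // filled during the earlier fprop

Mat up_grad( batch_size, conn->up_size );      // full matrix: gradient we provide
Mat down_grad( 0, conn->down_size );           // empty (length 0): to be computed

TVec<Mat*> ports_value( 2 ), ports_gradient( 2 );
ports_value[0] = &down_vals;    ports_value[1] = &up_vals;
ports_gradient[0] = &down_grad; ports_gradient[1] = &up_grad;

conn->bpropAccUpdate( ports_value, ports_gradient );
// down_grad now has batch_size rows holding the accumulated input gradient.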

Reimplemented from PLearn::OnlineLearningModule.

Definition at line 869 of file RBMMixedConnection.cc.

References i, PLearn::TMat< T >::isEmpty(), j, PLearn::TMat< T >::length(), PLearn::TVec< T >::length(), PLASSERT, PLCHECK_MSG, PLearn::TMat< T >::resize(), PLearn::TMat< T >::subMatColumns(), and PLearn::TMat< T >::width().

{
    PLASSERT( ports_value.length() == nPorts()
              && ports_gradient.length() == nPorts() );

    Mat* down = ports_value[0];
    Mat* up = ports_value[1];
    Mat* down_grad = ports_gradient[0];
    Mat* up_grad = ports_gradient[1];

    PLASSERT( down && !down->isEmpty() );
    PLASSERT( up && !up->isEmpty() );

    int batch_size = down->length();
    PLASSERT( up->length() == batch_size );

    // If we have up_grad
    if( up_grad && !up_grad->isEmpty() )
    {
        // down_grad should not be provided
        PLASSERT( !down_grad || down_grad->isEmpty() );
        PLASSERT( up_grad->length() == batch_size );
        PLASSERT( up_grad->width() == up_size );

        // If we want down_grad
        bool compute_down_grad = false;
        if( down_grad && down_grad->isEmpty() )
        {
            PLASSERT( down_grad->width() == down_size );
            down_grad->resize(batch_size, down_size);
            compute_down_grad = true;
        }

        for (int j=0; j<n_down_blocks; j++)
        {
            int init_j = down_init_positions[j];
            int down_size_j = down_block_sizes[j];
            Mat sub_down = down->subMatColumns(init_j, down_size_j);
            Mat sub_down_grad;
            Mat* p_sub_down_grad = NULL;
            if( compute_down_grad )
            {
                sub_down_grad = down_grad->subMatColumns(init_j, down_size_j);
                p_sub_down_grad = &sub_down_grad;
            }

            for (int i=0; i<n_up_blocks; i++)
            {
                if(sub_connections(i,j))
                {
                    int init_i = up_init_positions[i];
                    int up_size_i = up_block_sizes[i];
                    Mat sub_up = up->subMatColumns(init_i, up_size_i);
                    Mat sub_up_grad =
                        up_grad->subMatColumns(init_i, up_size_i);

                    TVec<Mat*> sub_ports_value(2);
                    sub_ports_value[0] = &sub_down;
                    sub_ports_value[1] = &sub_up;
                    TVec<Mat*> sub_ports_gradient(2);
                    // NOT &sub_down_grad because we may want a NULL pointer
                    sub_ports_gradient[0] = p_sub_down_grad;
                    sub_ports_gradient[1] = &sub_up_grad;

                    if( compute_down_grad )
                        sub_down_grad.resize(0, down_size_j);

                    sub_connections(i,j)->bpropAccUpdate( sub_ports_value,
                                                          sub_ports_gradient );
                }
            }
        }
    }
    else if( down_grad && !down_grad->isEmpty() )
    {
        PLASSERT( down_grad->length() == batch_size );
        PLASSERT( down_grad->width() == down_size );

        // If we want up_grad
        bool compute_up_grad = false;
        if( up_grad && up_grad->isEmpty() )
        {
            PLASSERT( up_grad->width() == up_size );
            up_grad->resize(batch_size, up_size);
            compute_up_grad = true;
        }

        for (int i=0; i<n_up_blocks; i++)
        {
            int init_i = up_init_positions[i];
            int up_size_i = up_block_sizes[i];
            Mat sub_up = up->subMatColumns(init_i, up_size_i);
            Mat sub_up_grad;
            Mat* p_sub_up_grad = NULL;
            if( compute_up_grad )
            {
                sub_up_grad = up_grad->subMatColumns(init_i, up_size_i);
                p_sub_up_grad = &sub_up_grad;
            }

            for (int j=0; j<n_down_blocks; j++)
            {
                int init_j = down_init_positions[j];
                int down_size_j = down_block_sizes[j];
                Mat sub_down = down->subMatColumns(init_j, down_size_j);
                Mat sub_down_grad =
                    down_grad->subMatColumns(init_j, down_size_j);

                TVec<Mat*> sub_ports_value(2);
                sub_ports_value[0] = &sub_down;
                sub_ports_value[1] = &sub_up;
                TVec<Mat*> sub_ports_gradient(2);
                sub_ports_gradient[0] = &sub_down_grad;
                // NOT &sub_up_grad because we may want a NULL pointer
                sub_ports_gradient[1] = p_sub_up_grad;

                if( compute_up_grad )
                    sub_up_grad.resize(0, up_size_i);

                sub_connections(i,j)->bpropAccUpdate( sub_ports_value,
                                                      sub_ports_gradient );
            }
        }
    }
    else
        PLCHECK_MSG( false,
                     "Unknown port configuration" );

}

Here is the call graph for this function:

void PLearn::RBMMixedConnection::bpropUpdate ( const Vec & input, const Vec & output, Vec & input_gradient, const Vec & output_gradient, bool accumulate = false ) [virtual]

Adapt based on the output gradient: this method should only be called just after a corresponding fprop; it should be called with the same arguments as fprop for the first two arguments (and output should not have been modified since then).

This version allows the input gradient to be obtained as well.

Since sub-classes are supposed to learn ONLINE, the object is 'ready-to-be-used' just after any bpropUpdate. N.B. A DEFAULT IMPLEMENTATION IS PROVIDED IN THE SUPER-CLASS, WHICH JUST CALLS bpropUpdate(input, output, input_gradient, output_gradient) AND IGNORES THE INPUT GRADIENT.
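
A hedged sketch of one online step ('conn' is assumed to be a built RBMMixedConnection; the fprop(Vec, Vec) call is the one inherited from the OnlineLearningModule interface):

Vec input( conn->down_size );          // filled by the caller
Vec output;
conn->fprop( input, output );          // forward pass, same first two arguments

Vec output_gradient( conn->up_size );  // gradient w.r.t. 'output'
Vec input_gradient;                    // resized and cleared inside (accumulate=false)
conn->bpropUpdate( input, output, input_gradient, output_gradient );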

Reimplemented from PLearn::OnlineLearningModule.

Definition at line 770 of file RBMMixedConnection.cc.

References PLearn::TVec< T >::clear(), i, j, PLASSERT, PLASSERT_MSG, PLearn::TVec< T >::resize(), PLearn::TVec< T >::size(), and PLearn::TVec< T >::subVec().

{
    PLASSERT( input.size() == down_size );
    PLASSERT( output.size() == up_size );
    PLASSERT( output_gradient.size() == up_size );

    if( accumulate )
    {
        PLASSERT_MSG( input_gradient.size() == down_size,
                      "Cannot resize input_gradient AND accumulate into it" );
    }
    else
    {
        input_gradient.resize( down_size );
        input_gradient.clear();
    }

    for( int j=0 ; j<n_down_blocks ; j++ )
    {
        int init_j = down_init_positions[j];
        int down_size_j = down_block_sizes[j];
        Vec sub_input = input.subVec( init_j, down_size_j );
        Vec sub_input_gradient = input_gradient.subVec( init_j, down_size_j );

        for( int i=0 ; i<n_up_blocks ; i++ )
        {
            if( sub_connections(i,j) )
            {
                int init_i = up_init_positions[i];
                int up_size_i = up_block_sizes[i];
                Vec sub_output = output.subVec( init_i, up_size_i );
                Vec sub_output_gradient = output_gradient.subVec( init_i,
                                                                  up_size_i );
                sub_connections(i,j)->bpropUpdate( sub_input, sub_output,
                                                   sub_input_gradient,
                                                   sub_output_gradient,
                                                   true );
            }
        }
    }
}


void PLearn::RBMMixedConnection::bpropUpdate ( const Mat & inputs, const Mat & outputs, Mat & input_gradients, const Mat & output_gradients, bool accumulate = false ) [virtual]

Batch version.

Reimplemented from PLearn::OnlineLearningModule.

Definition at line 815 of file RBMMixedConnection.cc.

References PLearn::TMat< T >::clear(), i, j, PLearn::TMat< T >::length(), PLASSERT, PLASSERT_MSG, PLearn::TMat< T >::resize(), PLearn::TMat< T >::subMatColumns(), and PLearn::TMat< T >::width().

{
    PLASSERT( inputs.width() == down_size );
    PLASSERT( outputs.width() == up_size );
    PLASSERT( output_gradients.width() == up_size );

    int batch_size = inputs.length();
    PLASSERT( outputs.length() == batch_size );
    PLASSERT( output_gradients.length() == batch_size );

    if( accumulate )
    {
        PLASSERT_MSG( input_gradients.width() == down_size &&
                      input_gradients.length() == batch_size,
                      "Cannot resize input_gradients AND accumulate into it" );
    }
    else
    {
        input_gradients.resize( batch_size, down_size );
        input_gradients.clear();
    }

    for( int j=0 ; j<n_down_blocks ; j++ )
    {
        int init_j = down_init_positions[j];
        int down_size_j = down_block_sizes[j];
        Mat sub_inputs = inputs.subMatColumns( init_j, down_size_j );
        Mat sub_input_gradients = input_gradients.subMatColumns( init_j,
                                                                 down_size_j );

        for( int i=0 ; i<n_up_blocks ; i++ )
        {
            if( sub_connections(i,j) )
            {
                int init_i = up_init_positions[i];
                int up_size_i = up_block_sizes[i];
                Mat sub_outputs = outputs.subMatColumns( init_i, up_size_i );
                Mat sub_output_gradients =
                    output_gradients.subMatColumns( init_i, up_size_i );
                sub_connections(i,j)->bpropUpdate( sub_inputs, sub_outputs,
                                                   sub_input_gradients,
                                                   sub_output_gradients,
                                                   true );
            }
        }
    }
}


void PLearn::RBMMixedConnection::build ( ) [virtual]

Post-constructor.

The normal implementation should simply call inherited::build(), then this class's build_(). This method should be callable again at later times, after modifying some option fields to change the "architecture" of the object.

Reimplemented from PLearn::RBMConnection.

Definition at line 192 of file RBMMixedConnection.cc.

void PLearn::RBMMixedConnection::build_ ( ) [private]

This does the actual building.

Reimplemented from PLearn::RBMConnection.

Definition at line 105 of file RBMMixedConnection.cc.

References i, j, and PLASSERT.

{
    up_size = 0;
    down_size = 0;

    n_up_blocks = sub_connections.length();
    n_down_blocks = sub_connections.width();

    if( n_up_blocks == 0 || n_down_blocks == 0 )
        return;

    up_init_positions.resize( n_up_blocks );
    up_block_sizes.resize( n_up_blocks );
    down_init_positions.resize( n_down_blocks );
    down_block_sizes.resize( n_down_blocks );
    row_of.resize( 0 );
    col_of.resize( 0 );

    // size equality check
    for( int i=0 ; i<n_up_blocks ; i++ )
    {
        up_block_sizes[i] = 0;
        for( int j=0 ; j<n_down_blocks ; j++ )
        {
            if( sub_connections(i,j) )
            {
                if( up_block_sizes[i] == 0 ) // first non-null sub_connection
                    up_block_sizes[i] = sub_connections(i,j)->up_size;
                else
                    PLASSERT( sub_connections(i,j)->up_size ==
                            up_block_sizes[i] );
            }
        }
        up_init_positions[i] = up_size;
        up_size += up_block_sizes[i];
        row_of.append( TVec<int>( up_block_sizes[i], i ) );
    }

    for( int j=0 ; j<n_down_blocks ; j++ )
    {
        down_block_sizes[j] = 0;
        for( int i=0 ; i<n_up_blocks ; i++ )
        {
            if( sub_connections(i,j) )
            {
                if( down_block_sizes[j] == 0 ) // first non-null sub_connection
                    down_block_sizes[j] = sub_connections(i,j)->down_size;
                else
                    PLASSERT( sub_connections(i,j)->down_size ==
                            down_block_sizes[j] );
            }
        }

        down_init_positions[j] = down_size;
        down_size += down_block_sizes[j];
        col_of.append( TVec<int>( down_block_sizes[j], j ) );
    }

    // Assign learning rate and momentum to sub_connections
    // If we have a random_gen and they do not, share it with them
    for( int i=0 ; i<n_up_blocks ; i++ )
        for( int j=0 ; j<n_down_blocks ; j++ )
        {
            if( sub_connections(i,j) )
            {
                if( learning_rate >= 0. )
                    sub_connections(i,j)->setLearningRate( learning_rate );

                if( momentum >= 0. )
                    sub_connections(i,j)->setMomentum( momentum );

                if( random_gen && !(sub_connections(i,j)->random_gen) )
                {
                    sub_connections(i,j)->random_gen = random_gen;
                    sub_connections(i,j)->forget();
                }
            }
        }

    // for OnlineLearningModule interface
    input_size = down_size;
    output_size = up_size;
}
string PLearn::RBMMixedConnection::classname ( ) const [virtual]

Reimplemented from PLearn::Object.

Definition at line 50 of file RBMMixedConnection.cc.

void PLearn::RBMMixedConnection::clearStats ( ) [virtual]

Clear all information accumulated during stats.

Implements PLearn::RBMConnection.

Definition at line 442 of file RBMMixedConnection.cc.

References i, and j.

{
    for( int i=0 ; i<n_up_blocks ; i++ )
        for( int j=0 ; j<n_down_blocks ; j++ )
            if( sub_connections(i,j) )
                sub_connections(i,j)->clearStats();

    pos_count = 0;
    neg_count = 0;
}
void PLearn::RBMMixedConnection::computeProduct ( int start, int length, const Vec & activations, bool accumulate = false ) const [virtual]

Computes the vectors of activation of "length" units, starting from "start", and stores them into "activations".

"start" indexes an up unit if "going_up", else a down unit.

Implements PLearn::RBMConnection.

Definition at line 453 of file RBMMixedConnection.cc.

References PLearn::TVec< T >::fill(), i, j, PLearn::TVec< T >::length(), PLASSERT, and PLearn::TVec< T >::subVec().

{
    PLASSERT( activations.length() == length );

    if( !accumulate )
        activations.subVec( start, length ).fill( 0. );

    if( going_up )
    {
        PLASSERT( start+length <= up_size );

        int init_row = row_of[start];
        int end_row = row_of[start+length-1];

        if( init_row == end_row )
        {
            int start_in_row = start - up_init_positions[init_row];

            for( int j=0 ; j<n_down_blocks ; j++ )
            {
                if( sub_connections(init_row,j) )
                {
                    sub_connections(init_row,j)->computeProduct(
                        start_in_row, length, activations, true );
                }
            }
        }
        else
        {
            // partial computation on init_row
            int start_in_init_row = start - up_init_positions[init_row];
            int len_in_init_row = up_init_positions[init_row+1]
                                    - start_in_init_row;
            int cur_pos = 0;

            Vec sub_activations = activations.subVec( cur_pos,
                                                      len_in_init_row );
            cur_pos += len_in_init_row;
            for( int j=0 ; j<n_down_blocks ; j++ )
            {
                if( sub_connections(init_row,j) )
                {
                    sub_connections(init_row,j)->computeProduct(
                        start_in_init_row, len_in_init_row,
                        sub_activations, true );
                }
            }

            // full computation for init_row < i < end_row
            for( int i=init_row+1 ; i<end_row ; i++ )
            {
                int up_size_i = up_block_sizes[i];
                sub_activations = activations.subVec( cur_pos, up_size_i );
                cur_pos += up_size_i;
                for( int j=0 ; j<n_down_blocks ; j++ )
                {
                    if( sub_connections(i,j) )
                    {
                        sub_connections(i,j)->computeProduct(
                            0, up_size_i, sub_activations, true );
                    }
                }

            }

            // partial computation on end_row
            int len_in_end_row = start+length - up_init_positions[end_row];
            sub_activations = activations.subVec( cur_pos, len_in_end_row );
            cur_pos += len_in_end_row;
            for( int j=0 ; j<n_down_blocks ; j++ )
            {
                if( sub_connections(end_row,j) )
                {
                    sub_connections(end_row,j)->computeProduct(
                        0, len_in_end_row, sub_activations, true );
                }
            }
        }
    }
    else
    {
        PLASSERT( start+length <= down_size );

        int init_col = col_of[start];
        int end_col = col_of[start+length-1];

        if( init_col == end_col )
        {
            int start_in_col = start - down_init_positions[init_col];

            for( int i=0 ; i<n_up_blocks ; i++ )
            {
                if( sub_connections(i,init_col) )
                {
                    sub_connections(i,init_col)->computeProduct(
                        start_in_col, length, activations, true );
                }
            }
        }
        else
        {
            // partial computation on init_col
            int start_in_init_col = start - down_init_positions[init_col];
            int len_in_init_col = down_init_positions[init_col+1]
                                    - start_in_init_col;
            int cur_pos = 0;

            Vec sub_activations = activations.subVec( cur_pos,
                                                      len_in_init_col );
            cur_pos += len_in_init_col;
            for( int i=0 ; i<n_up_blocks ; i++ )
            {
                if( sub_connections(i,init_col) )
                {
                    sub_connections(i,init_col)->computeProduct(
                        start_in_init_col, len_in_init_col,
                        sub_activations, true );
                }
            }

            // full computation for init_col < j < end_col
            for( int j=init_col+1 ; j<end_col ; j++ )
            {
                int down_size_j = down_block_sizes[j];
                sub_activations = activations.subVec( cur_pos, down_size_j );
                cur_pos += down_size_j;
                for( int i=0 ; i<n_up_blocks ; i++ )
                {
                    if( sub_connections(i,j) )
                    {
                        sub_connections(i,j)->computeProduct(
                            0, down_size_j, sub_activations, true );
                    }
                }

            }

            // partial computation on end_col
            int len_in_end_col = start+length - down_init_positions[end_col];
            sub_activations = activations.subVec( cur_pos, len_in_end_col );
            cur_pos += len_in_end_col;
            for( int i=0 ; i<n_up_blocks ; i++ )
            {
                if( sub_connections(i,end_col) )
                {
                    sub_connections(i,end_col)->computeProduct(
                        0, len_in_end_col, sub_activations, true );
                }
            }
        }
    }
}


void PLearn::RBMMixedConnection::computeProducts ( int start, int length, Mat & activations, bool accumulate = false ) const [virtual]

Same as 'computeProduct' but for mini-batches.

Implements PLearn::RBMConnection.

Definition at line 609 of file RBMMixedConnection.cc.

References PLearn::TMat< T >::clear(), i, j, PLASSERT, PLearn::TMat< T >::resize(), PLearn::TMat< T >::subMatColumns(), and PLearn::TMat< T >::width().

{
    PLASSERT( activations.width() == length );
    activations.resize(inputs_mat.length(), length);

    if( !accumulate )
        activations.subMatColumns( start, length ).clear();

    if( going_up )
    {
        PLASSERT( start+length <= up_size );

        int init_row = row_of[start];
        int end_row = row_of[start+length-1];

        if( init_row == end_row )
        {
            int start_in_row = start - up_init_positions[init_row];

            for( int j=0 ; j<n_down_blocks ; j++ )
            {
                if( sub_connections(init_row,j) )
                {
                    sub_connections(init_row,j)->computeProducts(
                        start_in_row, length, activations, true );
                }
            }
        }
        else
        {
            // partial computation on init_row
            int start_in_init_row = start - up_init_positions[init_row];
            int len_in_init_row = up_init_positions[init_row+1]
                                    - start_in_init_row;
            int cur_pos = 0;

            Mat sub_activations = activations.subMatColumns( cur_pos,
                                                             len_in_init_row );
            cur_pos += len_in_init_row;
            for( int j=0 ; j<n_down_blocks ; j++ )
            {
                if( sub_connections(init_row,j) )
                {
                    sub_connections(init_row,j)->computeProducts(
                        start_in_init_row, len_in_init_row,
                        sub_activations, true );
                }
            }

            // full computation for init_row < i < end_row
            for( int i=init_row+1 ; i<end_row ; i++ )
            {
                int up_size_i = up_block_sizes[i];
                sub_activations = activations.subMatColumns( cur_pos,
                                                             up_size_i );
                cur_pos += up_size_i;
                for( int j=0 ; j<n_down_blocks ; j++ )
                {
                    if( sub_connections(i,j) )
                    {
                        sub_connections(i,j)->computeProducts(
                            0, up_size_i, sub_activations, true );
                    }
                }

            }

            // partial computation on end_row
            int len_in_end_row = start+length - up_init_positions[end_row];
            sub_activations = activations.subMatColumns( cur_pos,
                                                         len_in_end_row );
            cur_pos += len_in_end_row;
            for( int j=0 ; j<n_down_blocks ; j++ )
            {
                if( sub_connections(end_row,j) )
                {
                    sub_connections(end_row,j)->computeProducts(
                        0, len_in_end_row, sub_activations, true );
                }
            }
        }
    }
    else
    {
        PLASSERT( start+length <= down_size );

        int init_col = col_of[start];
        int end_col = col_of[start+length-1];

        if( init_col == end_col )
        {
            int start_in_col = start - down_init_positions[init_col];

            for( int i=0 ; i<n_up_blocks ; i++ )
            {
                if( sub_connections(i,init_col) )
                {
                    sub_connections(i,init_col)->computeProducts(
                        start_in_col, length, activations, true );
                }
            }
        }
        else
        {
            // partial computation on init_col
            int start_in_init_col = start - down_init_positions[init_col];
            int len_in_init_col = down_init_positions[init_col+1]
                                    - start_in_init_col;
            int cur_pos = 0;

            Mat sub_activations = activations.subMatColumns( cur_pos,
                                                             len_in_init_col );
            cur_pos += len_in_init_col;
            for( int i=0 ; i<n_up_blocks ; i++ )
            {
                if( sub_connections(i,init_col) )
                {
                    sub_connections(i,init_col)->computeProducts(
                        start_in_init_col, len_in_init_col,
                        sub_activations, true );
                }
            }

            // full computation for init_col < j < end_col
            for( int j=init_col+1 ; j<end_col ; j++ )
            {
                int down_size_j = down_block_sizes[j];
                sub_activations = activations.subMatColumns( cur_pos,
                                                             down_size_j );
                cur_pos += down_size_j;
                for( int i=0 ; i<n_up_blocks ; i++ )
                {
                    if( sub_connections(i,j) )
                    {
                        sub_connections(i,j)->computeProducts(
                            0, down_size_j, sub_activations, true );
                    }
                }

            }

            // partial computation on end_col
            int len_in_end_col = start+length - down_init_positions[end_col];
            sub_activations = activations.subMatColumns( cur_pos,
                                                         len_in_end_col );
            cur_pos += len_in_end_col;
            for( int i=0 ; i<n_up_blocks ; i++ )
            {
                if( sub_connections(i,end_col) )
                {
                    sub_connections(i,end_col)->computeProducts(
                        0, len_in_end_col, sub_activations, true );
                }
            }
        }
    }
}


void PLearn::RBMMixedConnection::declareOptions ( OptionList & ol) [static, protected]

Declares the class options.

Reimplemented from PLearn::RBMConnection.

Definition at line 61 of file RBMMixedConnection.cc.

References PLearn::OptionBase::buildoption, PLearn::declareOption(), down_init_positions, PLearn::RBMConnection::down_size, PLearn::RBMConnection::initialization_method, PLearn::OptionBase::learntoption, n_down_blocks, n_up_blocks, PLearn::OptionBase::nosave, PLearn::redeclareOption(), sub_connections, up_init_positions, and PLearn::RBMConnection::up_size.

{
    declareOption(ol, "sub_connections", &RBMMixedConnection::sub_connections,
                  OptionBase::buildoption,
                  "Matrix containing the sub-transformations (blocks).");

    declareOption(ol, "up_init_positions",
                  &RBMMixedConnection::up_init_positions,
                  OptionBase::learntoption,
                  "Initial vertical index of the blocks.");

    declareOption(ol, "down_init_positions",
                  &RBMMixedConnection::down_init_positions,
                  OptionBase::learntoption,
                  "Initial horizontal index of the blocks.");

    declareOption(ol, "n_up_blocks", &RBMMixedConnection::n_up_blocks,
                  OptionBase::learntoption,
                  "Length of the blocks matrix.");

    declareOption(ol, "n_down_blocks", &RBMMixedConnection::n_down_blocks,
                  OptionBase::learntoption,
                  "Width of the blocks matrix.");

    // Now call the parent class' declareOptions
    inherited::declareOptions(ol);

    redeclareOption(ol, "down_size", &RBMMixedConnection::down_size,
                    OptionBase::learntoption,
                    "It is computed from the sizes of the sub-blocks.");

    redeclareOption(ol, "up_size", &RBMMixedConnection::up_size,
                    OptionBase::learntoption,
                    "It is computed from the sizes of the sub-blocks.");

    redeclareOption(ol, "initialization_method",
                    &RBMMixedConnection::initialization_method,
                    OptionBase::nosave,
                    "initialization_method is useless here.");
}


static const PPath& PLearn::RBMMixedConnection::declaringFile ( ) [inline, static]

Reimplemented from PLearn::RBMConnection.

Definition at line 193 of file RBMMixedConnection.h.


RBMMixedConnection * PLearn::RBMMixedConnection::deepCopy ( CopiesMap & copies) const [virtual]

Reimplemented from PLearn::RBMConnection.

Definition at line 50 of file RBMMixedConnection.cc.

void PLearn::RBMMixedConnection::forget ( ) [virtual]

Reset the parameters to the state they would be in before starting training.

Note that this method is necessarily called from build().

Implements PLearn::OnlineLearningModule.

Definition at line 1003 of file RBMMixedConnection.cc.

References i, j, and PLWARNING.

{
    clearStats();

    if( !random_gen )
    {
        PLWARNING("RBMMixedConnection: cannot forget() without random_gen");
        return;
    }
    for( int i=0 ; i<n_up_blocks ; i++ )
        for( int j=0 ; j<n_down_blocks ; j++ )
            if( sub_connections(i,j) )
            {
                if( !(sub_connections(i,j)->random_gen) )
                    sub_connections(i,j)->random_gen = random_gen;
                sub_connections(i,j)->forget();
            }
}
OptionList & PLearn::RBMMixedConnection::getOptionList ( ) const [virtual]

Reimplemented from PLearn::Object.

Definition at line 50 of file RBMMixedConnection.cc.

OptionMap & PLearn::RBMMixedConnection::getOptionMap ( ) const [virtual]

Reimplemented from PLearn::Object.

Definition at line 50 of file RBMMixedConnection.cc.

RemoteMethodMap & PLearn::RBMMixedConnection::getRemoteMethodMap ( ) const [virtual]

Reimplemented from PLearn::Object.

Definition at line 50 of file RBMMixedConnection.cc.

void PLearn::RBMMixedConnection::makeDeepCopyFromShallowCopy ( CopiesMap & copies) [virtual]

Transforms a shallow copy into a deep copy.

Reimplemented from PLearn::RBMConnection.

Definition at line 202 of file RBMMixedConnection.cc.

References PLearn::deepCopyField().


Vec PLearn::RBMMixedConnection::makeParametersPointHere ( const Vec & global_parameters) [virtual]

Make the parameters data be sub-vectors of the given global_parameters.

The argument should have size >= nParameters. The result is a Vec that starts just after this object's parameters end, i.e. result = global_parameters.subVec(nParameters(), global_parameters.size() - nParameters()); this makes it easy to chain calls of this method on multiple RBMParameters.
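
A hedged chaining sketch ('conn_a' and 'conn_b' are hypothetical RBMConnections; for RBMMixedConnection itself nParameters() is 0, so the returned Vec is the argument unchanged):

Vec params( conn_a->nParameters() + conn_b->nParameters() );
Vec rest = conn_a->makeParametersPointHere( params );  // conn_a's slice consumed
rest     = conn_b->makeParametersPointHere( rest );    // then conn_b's slice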

Implements PLearn::RBMConnection.

Definition at line 1044 of file RBMMixedConnection.cc.

{
    return global_parameters;
}
int PLearn::RBMMixedConnection::nParameters ( ) const [virtual]

Return the number of parameters.

The parameters of an RBMMixedConnection are held by its sub_connections, so this implementation reports 0 (see the code below).

Implements PLearn::RBMConnection.

Definition at line 1034 of file RBMMixedConnection.cc.

{
    return 0;
}
void PLearn::RBMMixedConnection::setAsDownInput ( const Vec & input) const [virtual]

Sets 'input_vec' to 'input', and 'going_up' to true.

Note that no data copy is made, so 'input' should not be modified afterwards.

Reimplemented from PLearn::RBMConnection.

Definition at line 281 of file RBMMixedConnection.cc.

References i, j, and PLearn::TVec< T >::subVec().

{
    inherited::setAsDownInput( input );

    for( int j=0 ; j<n_down_blocks ; j++ )
    {
        Vec sub_input = input.subVec( down_init_positions[j],
                                      down_block_sizes[j] );

        for( int i=0 ; i<n_up_blocks ; i++ )
            if( sub_connections(i,j) )
                sub_connections(i,j)->setAsDownInput( sub_input );
    }
}


void PLearn::RBMMixedConnection::setAsDownInputs ( const Mat & inputs) const [virtual]

Set 'inputs_mat' to 'inputs', and 'going_up' to true.

Note that no data copy is made, so 'inputs' should not be modified afterwards.

Reimplemented from PLearn::RBMConnection.

Definition at line 299 of file RBMMixedConnection.cc.

References i, j, and PLearn::TMat< T >::subMatColumns().

{
    inherited::setAsDownInputs( inputs );

    for( int j=0 ; j<n_down_blocks ; j++ )
    {
        Mat sub_inputs = inputs.subMatColumns( down_init_positions[j],
                                               down_block_sizes[j] );

        for( int i=0 ; i<n_up_blocks ; i++ )
            if( sub_connections(i,j) )
                sub_connections(i,j)->setAsDownInputs( sub_inputs );
    }
}


void PLearn::RBMMixedConnection::setAsUpInput ( const Vec & input) const [virtual]

Sets 'input_vec' to 'input', and 'going_up' to false.

Note that no data copy is made, so 'input' should not be modified afterwards.

Reimplemented from PLearn::RBMConnection.

Definition at line 245 of file RBMMixedConnection.cc.

References i, j, and PLearn::TVec< T >::subVec().

{
    inherited::setAsUpInput( input );

    for( int i=0 ; i<n_up_blocks ; i++ )
    {
        Vec sub_input = input.subVec( up_init_positions[i],
                                      up_block_sizes[i] );

        for( int j=0 ; j<n_down_blocks ; j++ )
            if( sub_connections(i,j) )
                sub_connections(i,j)->setAsUpInput( sub_input );
    }
}


void PLearn::RBMMixedConnection::setAsUpInputs ( const Mat & inputs) const [virtual]

Set 'inputs_mat' to 'inputs', and 'going_up' to false.

Note that no data copy is made, so 'inputs' should not be modified afterwards.

Reimplemented from PLearn::RBMConnection.

Definition at line 263 of file RBMMixedConnection.cc.

References i, j, and PLearn::TMat< T >::subMatColumns().

{
    inherited::setAsUpInputs( inputs );

    for( int i=0 ; i<n_up_blocks ; i++ )
    {
        Mat sub_inputs = inputs.subMatColumns( up_init_positions[i],
                                               up_block_sizes[i] );

        for( int j=0 ; j<n_down_blocks ; j++ )
            if( sub_connections(i,j) )
                sub_connections(i,j)->setAsUpInputs( sub_inputs );
    }
}


void PLearn::RBMMixedConnection::setLearningRate ( real  the_learning_rate) [virtual]

Sets the learning rate, also in the sub_connections.

Reimplemented from PLearn::RBMConnection.

Definition at line 219 of file RBMMixedConnection.cc.

References i, and j.

{
    inherited::setLearningRate( the_learning_rate );

    for( int i=0 ; i<n_up_blocks ; i++ )
        for( int j=0 ; j<n_down_blocks ; j++ )
            if( sub_connections(i,j) )
                sub_connections(i,j)->setLearningRate( the_learning_rate );
}
void PLearn::RBMMixedConnection::setMomentum ( real  the_momentum) [virtual]

Sets the momentum, also in the sub_connections.

Reimplemented from PLearn::RBMConnection.

Definition at line 232 of file RBMMixedConnection.cc.

References i, and j.

{
    inherited::setMomentum( the_momentum );

    for( int i=0 ; i<n_up_blocks ; i++ )
        for( int j=0 ; j<n_down_blocks ; j++ )
            if( sub_connections(i,j) )
                sub_connections(i,j)->setMomentum( the_momentum );
}
void PLearn::RBMMixedConnection::update ( const Vec & pos_down_values, const Vec & pos_up_values, const Vec & neg_down_values, const Vec & neg_up_values ) [virtual]

Updates parameters according to contrastive divergence gradient, not using the statistics but the explicit values passed.
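
For reference, the standard contrastive-divergence form that a plain weight block would follow (a hedged statement, not a quotation of any sub-connection's code) is

\Delta W = \eta \left( u^{+} (d^{+})^{\top} - u^{-} (d^{-})^{\top} \right),

where u^{+}/d^{+} are the positive-phase up and down values, u^{-}/d^{-} the negative-phase ones, and \eta is the learning rate. This class simply dispatches the corresponding sub-vectors to each non-null sub-connection.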

Reimplemented from PLearn::RBMConnection.

Definition at line 377 of file RBMMixedConnection.cc.

References i, j, and PLearn::TVec< T >::subVec().

{
    for( int i=0 ; i<n_up_blocks ; i++ )
    {
        int up_begin = up_init_positions[i];
        int up_length = up_block_sizes[i];
        Vec sub_pos_up_values = pos_up_values.subVec( up_begin, up_length );
        Vec sub_neg_up_values = neg_up_values.subVec( up_begin, up_length );
        for( int j=0 ; j<n_down_blocks ; j++ )
        {
            if( sub_connections(i,j) )
            {
                int down_begin = down_init_positions[j];
                int down_length = sub_connections(i,j)->down_size;
                Vec sub_pos_down_values = pos_down_values.subVec( down_begin,
                                                                  down_length );
                Vec sub_neg_down_values = neg_down_values.subVec( down_begin,
                                                                  down_length );

                sub_connections(i,j)->update( sub_pos_down_values,
                                              sub_pos_up_values,
                                              sub_neg_down_values,
                                              sub_neg_up_values );
            }
        }
    }
}


void PLearn::RBMMixedConnection::update ( const Mat & pos_down_values, const Mat & pos_up_values, const Mat & neg_down_values, const Mat & neg_up_values ) [virtual]

Batch version, using the explicit values passed.

Reimplemented from PLearn::RBMConnection.

Definition at line 409 of file RBMMixedConnection.cc.

References i, j, and PLearn::TMat< T >::subMatColumns().

{
    for( int i=0 ; i<n_up_blocks ; i++ )
    {
        int up_begin = up_init_positions[i];
        int up_length = up_block_sizes[i];
        Mat sub_pos_up_values = pos_up_values.subMatColumns( up_begin,
                                                             up_length );
        Mat sub_neg_up_values = neg_up_values.subMatColumns( up_begin,
                                                             up_length );
        for( int j=0 ; j<n_down_blocks ; j++ )
        {
            if( sub_connections(i,j) )
            {
                int down_begin = down_init_positions[j];
                int down_length = sub_connections(i,j)->down_size;
                Mat sub_pos_down_values =
                    pos_down_values.subMatColumns( down_begin, down_length );
                Mat sub_neg_down_values =
                    neg_down_values.subMatColumns( down_begin, down_length );

                sub_connections(i,j)->update( sub_pos_down_values,
                                              sub_pos_up_values,
                                              sub_neg_down_values,
                                              sub_neg_up_values );
            }
        }
    }
}


void PLearn::RBMMixedConnection::update ( ) [virtual]

Updates parameters according to contrastive divergence gradient.

Implements PLearn::RBMConnection.

Definition at line 365 of file RBMMixedConnection.cc.

References i, and j.

{
    for( int i=0 ; i<n_up_blocks ; i++ )
        for( int j=0 ; j<n_down_blocks ; j++ )
            if( sub_connections(i,j) )
                sub_connections(i,j)->update();

    clearStats();
}

Member Data Documentation

StaticInitializer PLearn::RBMMixedConnection::_static_initializer_ [static]

Reimplemented from PLearn::RBMConnection.

Definition at line 193 of file RBMMixedConnection.h.

TVec< int > PLearn::RBMMixedConnection::col_of [protected]

Definition at line 216 of file RBMMixedConnection.h.

TVec< int > PLearn::RBMMixedConnection::down_block_sizes [protected]

Horizontal sizes of the blocks.

Definition at line 213 of file RBMMixedConnection.h.

TVec< int > PLearn::RBMMixedConnection::down_init_positions [protected]

Initial horizontal index of the blocks.

Definition at line 210 of file RBMMixedConnection.h.

Referenced by declareOptions().

int PLearn::RBMMixedConnection::n_down_blocks [protected]

sub_connections.width()

Definition at line 222 of file RBMMixedConnection.h.

Referenced by declareOptions().

int PLearn::RBMMixedConnection::n_up_blocks [protected]

sub_connections.length()

Definition at line 219 of file RBMMixedConnection.h.

Referenced by declareOptions().

TVec< int > PLearn::RBMMixedConnection::row_of [protected]

Definition at line 215 of file RBMMixedConnection.h.

TMat< PP< RBMConnection > > PLearn::RBMMixedConnection::sub_connections

Matrix containing the sub-transformations (blocks).

Definition at line 66 of file RBMMixedConnection.h.

Referenced by declareOptions().

TVec< int > PLearn::RBMMixedConnection::up_block_sizes [protected]

Vertical sizes of the blocks.

Definition at line 207 of file RBMMixedConnection.h.

TVec< int > PLearn::RBMMixedConnection::up_init_positions [protected]

Initial vertical index of the blocks.

Definition at line 204 of file RBMMixedConnection.h.

Referenced by declareOptions().


The documentation for this class was generated from the following files:
 RBMMixedConnection.h
 RBMMixedConnection.cc