PLearn 0.1
PLearn::DenoisingRecurrentNet Class Reference

Model made of RBMs linked through time.

#include <DenoisingRecurrentNet.h>

Inheritance diagram for PLearn::DenoisingRecurrentNet (diagram omitted).
Collaboration diagram for PLearn::DenoisingRecurrentNet (diagram omitted).

Public Member Functions

void encodeSequence (Mat sequence, Mat &encoded_seq) const
 Encodes a sequence according to the specified encoding option (declared const because it needs to be called in test).
void inject_zero_forcing_noise (Mat sequence, double noise_prob) const
void inject_zero_forcing_noise (Vec sequence, double noise_prob) const
 DenoisingRecurrentNet ()
 Default constructor.
virtual int outputsize () const
 Returns the size of this learner's output (which typically may depend on its inputsize(), targetsize() and set options).
void setTrainingSet (VMat training_set, bool call_forget=true)
 Declares the training set.
virtual void forget ()
 (Re-)initializes the PLearner in its fresh state (that state may depend on the 'seed' option) and sets 'stage' back to 0 (this is the stage of a fresh learner!).
virtual void train ()
 The role of the train method is to bring the learner up to stage==nstages, updating the train_stats collector with training costs measured on-line in the process.
void setLearningRate (real the_learning_rate)
 Sets the learning rate of all layers and connections; remembers it by copying the value to current_learning_rate.
virtual void computeOutput (const Vec &input, Vec &output) const
 Computes the output from the input.
virtual void computeCostsFromOutputs (const Vec &input, const Vec &output, const Vec &target, Vec &costs) const
 Computes the costs from already computed output.
virtual TVec< std::string > getTestCostNames () const
 Returns the names of the costs computed by computeCostsFromOutputs (and thus the test method).
int nSequences () const
 Returns the number of sequences in the training_set.
void getSequence (int i, Mat &seq) const
 Returns the ith sequence.
void generate (int t, int n)
 Generate music in a folder.
void generateArtificial ()
 Generate music in a folder.
virtual TVec< std::string > getTrainCostNames () const
 Returns the names of the objective costs that the train method computes and for which it updates the VecStatsCollector train_stats.
void partition (TVec< double > part, TVec< double > periode, TVec< double > vel) const
 Use the partition.
void clamp_units (const Vec layer_vector, PP< RBMLayer > layer, TVec< int > symbol_sizes) const
 Clamps the layer units based on a layer vector.
void clamp_units (const Vec layer_vector, PP< RBMLayer > layer, TVec< int > symbol_sizes, const Vec original_mask, Vec &formated_mask) const
 Clamps the layer units based on a layer vector and provides the associated mask in the correct format.
void recurrentUpdate (real input_reconstruction_weight, real hidden_reconstruction_cost_weight, real temporal_gradient_contribution, real prediction_cost_weight, real inputAndDynamicPart, Vec train_costs, Vec train_n_items)
 Updates both the RBM parameters and the dynamic connections in the recurrent tuning phase, after the visible units have been clamped.
virtual void test (VMat testset, PP< VecStatsCollector > test_stats, VMat testoutputs=0, VMat testcosts=0) const
 Performs test on testset, updating test cost statistics, and optionally filling testoutputs and testcosts.
virtual string classname () const
virtual OptionList & getOptionList () const
virtual OptionMap & getOptionMap () const
virtual RemoteMethodMap & getRemoteMethodMap () const
virtual DenoisingRecurrentNet * deepCopy (CopiesMap &copies) const
virtual void build ()
 Finish building the object; just call inherited::build followed by build_()
virtual void makeDeepCopyFromShallowCopy (CopiesMap &copies)
 Transforms a shallow copy into a deep copy.

Static Public Member Functions

static void locateSequenceBoundaries (VMat dataset, TVec< int > &boundaries, real end_of_sequence_symbol)
static void encode_onehot_diffNote_duration (Mat sequence, Mat &encoded_sequence, bool use_silence, int duration_nbits=20)
static void encode_onehot_note_octav_duration (Mat sequence, Mat &encoded_sequence, int prepend_zero_rows, bool use_silence, int octav_nbits, int duration_nbits=20)
static void encode_onehot_timeframe (Mat sequence, Mat &encoded_sequence, int prepend_zero_rows, bool use_silence=false)
static int duration_to_number_of_timeframes (int duration)
static int getDurationBit (int duration)
static Vec getInputWindow (Mat sequence, int startpos, int winsize)
static void getNoteAndOctave (int midi_number, int &note, int &octave)
static string _classname_ ()
static OptionList & _getOptionList_ ()
static RemoteMethodMap & _getRemoteMethodMap_ ()
static Object * _new_instance_for_typemap_ ()
static bool _isa_ (const Object *o)
static void _static_initialize_ ()
static const PPath & declaringFile ()

Public Attributes

Vec target_layers_weights
 The training weights of each target layer.
bool use_target_layers_masks
 Indication that a mask indicating which target to predict is present in the input part of the VMatrix dataset.
real end_of_sequence_symbol
 Value of the first input component for end-of-sequence delimiter.
PP< RBMLayer > input_layer
 The input layer of the model.
TVec< PP< RBMLayer > > target_layers
 The target layers of the model.
PP< RBMLayer > hidden_layer
 The hidden layer of the model.
PP< RBMLayer > hidden_layer2
 The second hidden layer of the model (optional)
PP< RBMConnection > dynamic_connections
 The RBMConnection between the first hidden layers, through time.
PP< RBMConnection > dynamic_reconstruction_connections
 The RBMConnection for the reconstruction between the hidden layers, through time.
PP< RBMConnection > hidden_connections
 The RBMConnection between the first and second hidden layers (optional)
PP< RBMConnection > input_connections
 The RBMConnection from input_layer to hidden_layer.
TVec< PP< RBMConnection > > target_connections
 The RBMConnection from hidden_layer (or hidden_layer2) to each target layer.
TVec< int > target_layers_n_of_target_elements
 Number of elements in the target part of a VMatrix associated to each target layer.
TVec< int > input_symbol_sizes
 Number of symbols for each symbolic field of train_set.
TVec< TVec< int > > target_symbol_sizes
 Number of symbols for each symbolic field of train_set.
string encoding
 Chooses what type of encoding to apply to an input sequence Possibilities: "timeframe", "note_duration", "note_octav_duration", "raw_masked_supervised".
bool noise
real L1_penalty_factor
 Optional (default=0) factor of L1 regularization term.
real L2_penalty_factor
 Optional (default=0) factor of L2 regularization term.
int input_window_size
 Input window size.
bool tied_input_reconstruction_weights
double input_noise_prob
double input_reconstruction_lr
double hidden_noise_prob
double hidden_reconstruction_lr
bool tied_hidden_reconstruction_weights
double noisy_recurrent_lr
double dynamic_gradient_scale_factor
double recurrent_lr
Vec mean_encoded_vec
Vec input_reconstruction_bias
Vec hidden_reconstruction_bias
Vec hidden_reconstruction_bias2
double prediction_cost_weight
double input_reconstruction_cost_weight
double hidden_reconstruction_cost_weight
double nb_stage_reconstruction
double nb_stage_target

Static Public Attributes

static StaticInitializer _static_initializer_

Static Protected Member Functions

static void declareOptions (OptionList &ol)
 Declares the class options.

Protected Attributes

double current_learning_rate
PP< AutoVMatrix > data
 Stores external data.
TVec< Mat > acc_target_connections_gr
Mat acc_input_connections_gr
Mat acc_dynamic_connections_gr
Mat acc_reconstruction_dynamic_connections_gr
Vec acc_target_bias_gr
 Stores the accumulated target bias gradient.
Vec acc_hidden_bias_gr
 Stores the accumulated hidden bias gradient.
Vec acc_recons_bias_gr
 Stores the accumulated reconstruction bias gradient.
Vec bias_gradient
 Stores bias gradient.
Vec visi_bias_gradient
 Stores the visible bias gradient.
Vec hidden_gradient
 Stores hidden gradient of dynamic connections.
Vec hidden_temporal_gradient
 Stores hidden gradient of dynamic connections coming from time t+1.
Mat hidden_list
 List of hidden layer values.
Mat hidden_act_no_bias_list
Mat hidden2_list
 List of second hidden layer values.
Mat hidden2_act_no_bias_list
TVec< Mat > target_prediction_list
 List of target prediction values.
TVec< Mat > target_prediction_act_no_bias_list
TVec< Vec > input_list
 List of input values.
TVec< Mat > targets_list
 List of target values.
Mat nll_list
 List of the nll of the input samples in a sequence.
TVec< Mat > masks_list
 List of all targets' masks.
Vec dynamic_act_no_bias_contribution
 Contribution of dynamic weights to hidden layer activation.
TVec< int > trainset_boundaries
TVec< int > testset_boundaries
Mat seq
Mat encoded_seq
Mat clean_encoded_seq
Vec input_reconstruction_prob
Vec hidden_reconstruction_prob

Private Types

typedef PLearner inherited

Private Member Functions

void build_ ()
 This does the actual building.
void applyMultipleSoftmaxToInputWindow (Vec input_reconstruction_activation, Vec input_reconstruction_prob)
void recurrentFprop (Vec train_costs, Vec train_n_items, bool useDynamicConnections=true) const
void encodeSequenceAndPopulateLists (Mat seq, bool doNoise) const
 Does encoding if needed and populates the lists.
void encodeAndCreateSupervisedSequence (Mat seq) const
 Encodes seq, then populates input_list, targets_list, masks_list.
void encodeAndCreateSupervisedSequence2 (Mat seq) const
void splitRawMaskedSupervisedSequence (Mat seq, bool doNoise) const
 For the (backward testing) raw_masked_supervised case. Populates: input_list, targets_list, masks_list.
void encode_artificialData (Mat seq) const
void resize_lists (int l) const
void trainUnconditionalPredictor ()
void unconditionalFprop (Vec train_costs, Vec train_n_items) const
Mat getTargetConnectionsWeightMatrix (int tar)
Mat getInputConnectionsWeightMatrix ()
Mat getDynamicConnectionsWeightMatrix ()
Mat getDynamicReconstructionConnectionsWeightMatrix ()
void updateTargetLayer (Vec &grad, Vec &bias, real &lr)
void bpropUpdateConnection (const Vec &input, const Vec &output, Vec &input_gradient, const Vec &output_gradient, Mat &weights, Mat &acc_weights_gr, int &down_size, int &up_size, real &lr, bool accumulate, bool using_penalty_factor)
void bpropUpdateHiddenLayer (const Vec &input, const Vec &output, Vec &input_gradient, const Vec &output_gradient, Vec &bias, real &lr)
void applyWeightPenalty (Mat &weights, Mat &acc_weights_gr, int &down_size, int &up_size, real &lr)
double fpropUpdateInputReconstructionFromHidden (Vec hidden, Mat &reconstruction_weights, Mat &acc_weights_gr, Vec &input_reconstruction_bias, Vec &input_reconstruction_prob, Vec clean_input, Vec hidden_gradient, double input_reconstruction_cost_weight, double lr)
 Builds input_reconstruction_prob from hidden (using reconstruction_weights, which is nhidden x ninputs, and input_reconstruction_bias), then backpropagates the reconstruction cost (after comparison with clean_input) with learning rate input_reconstruction_lr. Accumulates the gradient in hidden_gradient, updates reconstruction_weights and input_reconstruction_bias, and returns the negative log cost.
double fpropInputReconstructionFromHidden (Vec hidden, Mat reconstruction_weights, Vec &input_reconstruction_bias, Vec &input_reconstruction_prob, Vec clean_input)
 Builds input_reconstruction_prob from hidden (using reconstruction_weights, which is nhidden x ninputs, and input_reconstruction_bias), and returns the negative log cost.
void updateInputReconstructionFromHidden (Vec hidden, Mat &reconstruction_weights, Mat &acc_weights_gr, Vec &input_reconstruction_bias, Vec input_reconstruction_prob, Vec clean_input, Vec hidden_gradient, double input_reconstruction_cost_weight, double lr)
 Backpropagates the reconstruction cost (after comparison with clean_input) with learning rate input_reconstruction_lr; accumulates the gradient in hidden_gradient and updates reconstruction_weights and input_reconstruction_bias.
double fpropHiddenReconstructionFromLastHidden2 (Vec theInput, Vec hidden, Mat reconstruction_weights, Mat &acc_weights_gr, Vec &reconstruction_bias, Vec &reconstruction_bias2, Vec hidden_reconstruction_activation_grad, Vec &reconstruction_prob, Vec clean_input, Vec hidden_gradient, double hidden_reconstruction_cost_weight, double lr)
double fpropHiddenReconstructionFromLastHidden (Vec theInput, Vec hidden, Mat reconstruction_weights, Mat &acc_weights_gr, Vec &reconstruction_bias, Vec &reconstruction_bias2, Vec hidden_reconstruction_activation_grad, Vec &reconstruction_prob, Vec clean_input, Vec hidden_gradient, double hidden_reconstruction_cost_weight, double lr)
double fpropHiddenSymmetricDynamicMatrix (Vec hidden, Mat reconstruction_weights, Vec &reconstruction_prob, Vec clean_input, Vec hidden_gradient, double hidden_reconstruction_cost_weight, double lr)

Detailed Description

Model made of RBMs linked through time.

Definition at line 61 of file DenoisingRecurrentNet.h.
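
A minimal usage sketch in C++ (not from the PLearn distribution; it only exercises members documented on this page, and trainset as well as the layer and connection sub-objects are placeholders the caller must provide):

    PP<DenoisingRecurrentNet> net = new DenoisingRecurrentNet();
    // ... set build options and sub-objects (see declareOptions below) ...
    net->build();
    net->setTrainingSet(trainset);   // VMat of sequences delimited by
                                     // end_of_sequence_symbol
    net->train();                    // runs until stage == nstages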


Member Typedef Documentation

typedef PLearner PLearn::DenoisingRecurrentNet::inherited [private]

Reimplemented from PLearn::PLearner.

Definition at line 63 of file DenoisingRecurrentNet.h.


Constructor & Destructor Documentation

PLearn::DenoisingRecurrentNet::DenoisingRecurrentNet ( )

Member Function Documentation

string PLearn::DenoisingRecurrentNet::_classname_ ( ) [static]

Reimplemented from PLearn::PLearner.

Definition at line 71 of file DenoisingRecurrentNet.cc.

OptionList & PLearn::DenoisingRecurrentNet::_getOptionList_ ( ) [static]

Reimplemented from PLearn::PLearner.

Definition at line 71 of file DenoisingRecurrentNet.cc.

RemoteMethodMap & PLearn::DenoisingRecurrentNet::_getRemoteMethodMap_ ( ) [static]

Reimplemented from PLearn::PLearner.

Definition at line 71 of file DenoisingRecurrentNet.cc.

bool PLearn::DenoisingRecurrentNet::_isa_ ( const Object * o ) [static]

Reimplemented from PLearn::PLearner.

Definition at line 71 of file DenoisingRecurrentNet.cc.

Object * PLearn::DenoisingRecurrentNet::_new_instance_for_typemap_ ( ) [static]

Reimplemented from PLearn::Object.

Definition at line 71 of file DenoisingRecurrentNet.cc.

void PLearn::DenoisingRecurrentNet::_static_initialize_ ( ) [static]

Reimplemented from PLearn::PLearner.

Definition at line 71 of file DenoisingRecurrentNet.cc.

void PLearn::DenoisingRecurrentNet::applyMultipleSoftmaxToInputWindow ( Vec input_reconstruction_activation, Vec input_reconstruction_prob ) [private]

Definition at line 1191 of file DenoisingRecurrentNet.cc.

References input_window_size, PLearn::TVec< T >::length(), PLERROR, PLearn::TVec< T >::subVec(), target_layers, and target_prediction_list.

Referenced by fpropInputReconstructionFromHidden().

{
    if(target_layers.length()!=1)
        PLERROR("applyMultipleSoftmaxToInputWindow was thought to work with a single target layer which is a RBMMixedLayer combining different multinomial costs");

    // int nelems = target_layers[0]->size();
    int nelems = target_prediction_list[0].width();

    if(input_reconstruction_activation.length() != input_window_size*nelems)
        PLERROR("Problem: input_reconstruction_activation.length() != input_window_size*nelems  (%d != %d * %d)",input_reconstruction_activation.length(),input_window_size,nelems);

    for(int k=0; k<input_window_size; k++)
    {
        Vec activation_window = input_reconstruction_activation.subVec(k*nelems, nelems);
        Vec prob_window = input_reconstruction_prob.subVec(k*nelems, nelems);
        target_layers[0]->fprop(activation_window, prob_window);
    }    
}

void PLearn::DenoisingRecurrentNet::applyWeightPenalty ( Mat & weights, Mat & acc_weights_gr, int & down_size, int & up_size, real & lr ) [private]

Definition at line 1334 of file DenoisingRecurrentNet.cc.

References i, j, L1_penalty_factor, and L2_penalty_factor.

Referenced by bpropUpdateConnection().

{
    // Apply penalty (decay) on weights.
    real delta_L1 = lr * L1_penalty_factor;
    real delta_L2 = lr * L2_penalty_factor;
    /*if (L2_decrease_type == "one_over_t")
        delta_L2 /= (1 + L2_decrease_constant * L2_n_updates);
    else if (L2_decrease_type == "sigmoid_like")
        delta_L2 *= sigmoid((L2_shift - L2_n_updates) * L2_decrease_constant);
    else
        PLERROR("In RBMMatrixConnection::applyWeightPenalty - Invalid value "
                "for L2_decrease_type: %s", L2_decrease_type.c_str());
    */
    for( int i=0; i<up_size; i++)
    {
        real* w_ = weights[i];
        real* a_w_g = acc_weights_gr[i];
        for( int j=0; j<down_size; j++ )
        {
            if( delta_L2 != 0. ){
                //w_[j] *= (1 - delta_L2);
                a_w_g[j] -= w_[j]*delta_L2;
            }

            if( delta_L1 != 0. )
            {
                if( w_[j] > delta_L1 )
                    a_w_g[j] -= delta_L1;
                else if( w_[j] < -delta_L1 )
                    a_w_g[j] += delta_L1;
                else
                    a_w_g[j] = 0.;
            }
        }
    }
    /*if (delta_L2 > 0)
      L2_n_updates++;*/
}
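
In symbols, with $\eta$ = lr, $\lambda_1$ = L1_penalty_factor and $\lambda_2$ = L2_penalty_factor, the loop above accumulates the decay terms into acc_weights_gr ($g$) instead of modifying the weights ($w$) directly:

\[ g_{ij} \mathrel{-}= \eta \lambda_2 w_{ij}, \qquad
   g_{ij} \mathrel{-}= \eta \lambda_1 \,\mathrm{sign}(w_{ij}) \;\text{ if } |w_{ij}| > \eta \lambda_1,
   \;\text{ else } g_{ij} = 0. \]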

void PLearn::DenoisingRecurrentNet::bpropUpdateConnection ( const Vec & input, const Vec & output, Vec & input_gradient, const Vec & output_gradient, Mat & weights, Mat & acc_weights_gr, int & down_size, int & up_size, real & lr, bool accumulate, bool using_penalty_factor ) [private]

Definition at line 1259 of file DenoisingRecurrentNet.cc.

References applyWeightPenalty(), PLearn::externalProductScaleAcc(), PLearn::fast_exact_is_equal(), L1_penalty_factor, L2_penalty_factor, PLASSERT, PLASSERT_MSG, PLearn::TVec< T >::resize(), PLearn::TVec< T >::size(), PLearn::transposeProduct(), and PLearn::transposeProductAcc().

Referenced by recurrentUpdate().

{
    PLASSERT( input.size() == down_size );
    PLASSERT( output.size() == up_size );
    PLASSERT( output_gradient.size() == up_size );

    if( accumulate )
    {
        PLASSERT_MSG( input_gradient.size() == down_size,
                      "Cannot resize input_gradient AND accumulate into it" );

        // input_gradient += weights' * output_gradient
        transposeProductAcc( input_gradient, weights, output_gradient );
    }
    else
    {
        input_gradient.resize( down_size );

        // input_gradient = weights' * output_gradient
        transposeProduct( input_gradient, weights, output_gradient );
    }

    // weights -= learning_rate * output_gradient * input'
    //externalProductScaleAcc( weights, output_gradient, input, -lr );
    externalProductScaleAcc( acc_weights_gr, output_gradient, input, -lr );
    
    if((!fast_exact_is_equal(L1_penalty_factor,0) || !fast_exact_is_equal(L2_penalty_factor,0)) && using_penalty_factor)
        applyWeightPenalty(weights, acc_weights_gr, down_size, up_size, lr);
}
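
In symbols, this is the standard backprop step for a linear connection, with the weight update accumulated into acc_weights_gr ($G$) rather than applied to weights ($W$):

\[ \nabla_{\mathrm{input}} = W^{\top} \nabla_{\mathrm{output}}, \qquad
   G \mathrel{+}= -\,\eta\, \nabla_{\mathrm{output}}\, \mathrm{input}^{\top}, \]

where $\eta$ is lr; the weight penalty is then applied through applyWeightPenalty() when a penalty factor is set.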

void PLearn::DenoisingRecurrentNet::bpropUpdateHiddenLayer ( const Vec & input, const Vec & output, Vec & input_gradient, const Vec & output_gradient, Vec & bias, real & lr ) [private]

Definition at line 1299 of file DenoisingRecurrentNet.cc.

References PLearn::TVec< T >::clear(), i, PLearn::TVec< T >::length(), PLASSERT, PLearn::TVec< T >::resize(), and PLearn::TVec< T >::size().

Referenced by recurrentUpdate().

{

    int size = bias.length();

    PLASSERT( input.size() == size );
    PLASSERT( output.size() == size );
    PLASSERT( output_gradient.size() == size );

    
    input_gradient.resize( size );
    input_gradient.clear();
    
    
    for( int i=0 ; i<size ; i++ )
    {
        real output_i = output[i];
        real in_grad_i;
        in_grad_i = output_i * (1-output_i) * output_gradient[i];
        input_gradient[i] += in_grad_i;
        
       
        // update the bias: bias -= learning_rate * input_gradient
        bias[i] -= lr * in_grad_i;
        
    }
    
    //applyBiasDecay();
}
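
The loop is the usual backprop step for a sigmoid layer: with output $y$ and output gradient $\delta$,

\[ \nabla_{\mathrm{input},i} = y_i (1 - y_i)\, \delta_i, \qquad
   b_i \mathrel{-}= \eta\, \nabla_{\mathrm{input},i}, \]

so the bias $b$ is updated in place while the input gradient is passed back to the previous layer.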

void PLearn::DenoisingRecurrentNet::build ( ) [virtual]

Finish building the object; just call inherited::build followed by build_()

Reimplemented from PLearn::PLearner.

Definition at line 504 of file DenoisingRecurrentNet.cc.

References PLearn::PLearner::build(), and build_().

void PLearn::DenoisingRecurrentNet::build_ ( ) [private]

This does the actual building.

Reimplemented from PLearn::PLearner.

Definition at line 302 of file DenoisingRecurrentNet.cc.

References PLearn::TVec< T >::clear(), dynamic_connections, dynamic_reconstruction_connections, encoding, PLearn::endl(), hidden_connections, hidden_layer, hidden_layer2, i, input_connections, input_layer, input_symbol_sizes, PLearn::PLearner::inputsize(), PLearn::TVec< T >::length(), PLASSERT, PLERROR, PLearn::TVec< T >::push_back(), PLearn::PLearner::random_gen, PLearn::TVec< T >::resize(), PLearn::TVec< T >::size(), target_connections, target_layers, target_layers_n_of_target_elements, target_layers_weights, target_symbol_sizes, PLearn::PLearner::targetsize(), PLearn::PLearner::train_set, and use_target_layers_masks.

Referenced by build().

{
    // ### This method should do the real building of the object,
    // ### according to set 'options', in *any* situation.
    // ### Typical situations include:
    // ###  - Initial building of an object from a few user-specified options
    // ###  - Building of a "reloaded" object: i.e. from the complete set of
    // ###    all serialised options.
    // ###  - Updating or "re-building" of an object after a few "tuning"
    // ###    options have been modified.
    // ### You should assume that the parent class' build_() has already been
    // ### called.

    MODULE_LOG << "build_() called" << endl;

    if(train_set)
    {
        use_target_layers_masks = (encoding=="raw_masked_supervised");

        PLASSERT( target_layers_weights.length() == target_layers.length() );
        PLASSERT( target_connections.length() == target_layers.length() );
        PLASSERT( target_layers.length() > 0 );
        PLASSERT( input_layer );
        PLASSERT( hidden_layer );
        PLASSERT( input_connections );

        // Parsing symbols in input
        int input_layer_size = 0;
        input_symbol_sizes.resize(0);
        PP<Dictionary> dict;
        int inputsize_without_masks = inputsize() 
            - ( use_target_layers_masks ? targetsize() : 0 );
        for(int i=0; i<inputsize_without_masks; i++)
        {
            dict = train_set->getDictionary(i);
            if(dict)
            {
                if( dict->size() == 0 )
                    PLERROR("DenoisingRecurrentNet::build_(): dictionary "
                        "of field %d is empty", i);
                input_symbol_sizes.push_back(dict->size());
                // Adjust size to include one-hot vector
                input_layer_size += dict->size();
            }
            else
            {
                input_symbol_sizes.push_back(-1);
                input_layer_size++;
            }
        }
/*
        if( input_layer->size != input_layer_size )
            PLERROR("DenoisingRecurrentNet::build_(): input_layer->size %d "
                    "should be %d", input_layer->size, input_layer_size);
*/
        // Parsing symbols in target
        int tar_layer = 0;
        int tar_layer_size = 0;
        target_symbol_sizes.resize(target_layers.length());
        for( tar_layer=0; tar_layer<target_layers.length(); tar_layer++ )
            target_symbol_sizes[tar_layer].resize(0);

        target_layers_n_of_target_elements.resize( targetsize() );
        target_layers_n_of_target_elements.clear();
        tar_layer = 0;
        for( int tar=0; tar<targetsize(); tar++)
        {
            if( tar_layer >= target_layers.length() )
                PLERROR("DenoisingRecurrentNet::build_(): target layers "
                        "does not cover all targets.");            

            dict = train_set->getDictionary(tar+inputsize());
            if(dict)
            {
                if( use_target_layers_masks )
                    PLERROR("DenoisingRecurrentNet::build_(): masks for "
                            "symbolic targets is not implemented.");
                if( dict->size() == 0 )
                    PLERROR("DenoisingRecurrentNet::build_(): dictionary "
                            "of field %d is empty", tar);

                target_symbol_sizes[tar_layer].push_back(dict->size());
                target_layers_n_of_target_elements[tar_layer]++;
                tar_layer_size += dict->size();
            }
            else
            {
                target_symbol_sizes[tar_layer].push_back(-1);
                target_layers_n_of_target_elements[tar_layer]++;
                tar_layer_size++;
            }

            if( target_layers[tar_layer]->size == tar_layer_size )
            {
                tar_layer++;
                tar_layer_size = 0;
            }
        }

        //if( tar_layer != target_layers.length() )
        //    PLERROR("DenoisingRecurrentNet::build_(): target layers "
        //            "does not cover all targets.");


        // Building weights and layers
        if( !input_layer->random_gen )
        {
            input_layer->random_gen = random_gen;
            input_layer->forget();
        }

        if( !hidden_layer->random_gen )
        {
            hidden_layer->random_gen = random_gen;
            hidden_layer->forget();
        }

        input_connections->down_size = input_layer->size;
        input_connections->up_size = hidden_layer->size;
        if( !input_connections->random_gen )
        {
            input_connections->random_gen = random_gen;
            input_connections->forget();
        }
        input_connections->build();


        if( dynamic_connections )
        {
            dynamic_connections->down_size = hidden_layer->size;
            dynamic_connections->up_size = hidden_layer->size;
            if( !dynamic_connections->random_gen )
            {
                dynamic_connections->random_gen = random_gen;
                dynamic_connections->forget();
            }
            dynamic_connections->build();
        }

        if( dynamic_reconstruction_connections )
        {

            dynamic_reconstruction_connections->down_size = hidden_layer->size;
            dynamic_reconstruction_connections->up_size = hidden_layer->size;
            if( !dynamic_reconstruction_connections->random_gen )
            {
                dynamic_reconstruction_connections->random_gen = random_gen;
                dynamic_reconstruction_connections->forget();
            }
            dynamic_reconstruction_connections->build();
            
        }

        if( hidden_layer2 )
        {
            if( !hidden_layer2->random_gen )
            {
                hidden_layer2->random_gen = random_gen;
                hidden_layer2->forget();
            }

            PLASSERT( hidden_connections );

            hidden_connections->down_size = hidden_layer->size;
            hidden_connections->up_size = hidden_layer2->size;
            if( !hidden_connections->random_gen )
            {
                hidden_connections->random_gen = random_gen;
                hidden_connections->forget();
            }
            hidden_connections->build();
        }

        for( int tar_layer = 0; tar_layer < target_layers.length(); tar_layer++ )
        {
            PLASSERT( target_layers[tar_layer] );
            PLASSERT( target_connections[tar_layer] );

            if( !target_layers[tar_layer]->random_gen )
            {
                target_layers[tar_layer]->random_gen = random_gen;
                target_layers[tar_layer]->forget();
            }

            if( hidden_layer2 )
                target_connections[tar_layer]->down_size = hidden_layer2->size;
            else
                target_connections[tar_layer]->down_size = hidden_layer->size;

            target_connections[tar_layer]->up_size = target_layers[tar_layer]->size;
            if( !target_connections[tar_layer]->random_gen )
            {
                target_connections[tar_layer]->random_gen = random_gen;
                target_connections[tar_layer]->forget();
            }
            target_connections[tar_layer]->build();
        }

    }
}

void PLearn::DenoisingRecurrentNet::clamp_units ( const Vec layer_vector, PP< RBMLayer > layer, TVec< int > symbol_sizes, const Vec original_mask, Vec & formated_mask ) const

Clamps the layer units based on a layer vector and provides the associated mask in the correct format.

Definition at line 2519 of file DenoisingRecurrentNet.cc.

References PLearn::TVec< T >::fill(), i, PLearn::TVec< T >::length(), PLASSERT, PLearn::TVec< T >::resize(), and PLearn::TVec< T >::subVec().

{
    int it = 0;
    int ss = -1;
    PLASSERT( original_mask.length() == layer_vector.length() );
    formated_mask.resize(layer->size);
    for(int i=0; i<layer_vector.length(); i++)
    {
        ss = symbol_sizes[i];
        // If input is a real ...
        if(ss < 0) 
        {
            formated_mask[it] = original_mask[i];
            layer->expectation[it++] = layer_vector[i];
        }
        else // ... or a symbol
        {
            // Convert to one-hot vector
            layer->expectation.subVec(it,ss).clear();
            formated_mask.subVec(it,ss).fill(original_mask[i]);
            layer->expectation[it+(int)layer_vector[i]] = 1;
            it += ss;
        }
    }
    layer->setExpectation( layer->expectation );
}

void PLearn::DenoisingRecurrentNet::clamp_units ( const Vec layer_vector, PP< RBMLayer > layer, TVec< int > symbol_sizes ) const

Clamps the layer units based on a layer vector.

Definition at line 2494 of file DenoisingRecurrentNet.cc.

References i, and PLearn::TVec< T >::length().

Referenced by generate(), and generateArtificial().

{
    int it = 0;
    int ss = -1;
    for(int i=0; i<layer_vector.length(); i++)
    {
        ss = symbol_sizes[i];
        // If input is a real ...
        if(ss < 0) 
        {
            layer->expectation[it++] = layer_vector[i];
        }
        else // ... or a symbol
        {
            // Convert to one-hot vector
            layer->expectation.subVec(it,ss).clear();
            layer->expectation[it+(int)layer_vector[i]] = 1;
            it += ss;
        }
    }
    layer->setExpectation( layer->expectation );
}
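
For concreteness, a small hypothetical example (values invented for illustration): with layer_vector = [0.7, 2] and symbol_sizes = [-1, 5], the real field passes through unchanged and the symbolic field becomes a one-hot group of 5 units, so the call

    clamp_units(layer_vector, input_layer, input_symbol_sizes);

leaves input_layer->expectation equal to [0.7, 0, 0, 1, 0, 0].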

string PLearn::DenoisingRecurrentNet::classname ( ) const [virtual]

Reimplemented from PLearn::Object.

Definition at line 71 of file DenoisingRecurrentNet.cc.

Referenced by train().

void PLearn::DenoisingRecurrentNet::computeCostsFromOutputs ( const Vec & input, const Vec & output, const Vec & target, Vec & costs ) const [virtual]

Computes the costs from already computed output.

Implements PLearn::PLearner.

Definition at line 2585 of file DenoisingRecurrentNet.cc.

References PLERROR.

{
    PLERROR("DenoisingRecurrentNet::computeCostsFromOutputs(): this is a "
            "dynamic, generative model, that can only compute negative "
            "log-likelihood costs for a whole VMat");
}

void PLearn::DenoisingRecurrentNet::computeOutput ( const Vec & input, Vec & output ) const [virtual]

Computes the output from the input.

Reimplemented from PLearn::PLearner.

Definition at line 2578 of file DenoisingRecurrentNet.cc.

References PLERROR.

{
    PLERROR("DenoisingRecurrentNet::computeOutput(): this is a dynamic, "
            "generative model, that can only compute negative log-likelihood "
            "costs for a whole VMat");
}
void PLearn::DenoisingRecurrentNet::declareOptions ( OptionList & ol ) [static, protected]

Declares the class options.

Reimplemented from PLearn::PLearner.

Definition at line 100 of file DenoisingRecurrentNet.cc.

References PLearn::OptionBase::buildoption, PLearn::declareOption(), PLearn::PLearner::declareOptions(), dynamic_connections, dynamic_gradient_scale_factor, dynamic_reconstruction_connections, encoding, end_of_sequence_symbol, hidden_connections, hidden_layer, hidden_layer2, hidden_noise_prob, hidden_reconstruction_cost_weight, hidden_reconstruction_lr, input_connections, input_layer, input_noise_prob, input_reconstruction_cost_weight, input_reconstruction_lr, input_symbol_sizes, input_window_size, L1_penalty_factor, L2_penalty_factor, PLearn::OptionBase::learntoption, mean_encoded_vec, nb_stage_reconstruction, nb_stage_target, noisy_recurrent_lr, prediction_cost_weight, recurrent_lr, target_connections, target_layers, target_layers_n_of_target_elements, target_layers_weights, target_symbol_sizes, tied_hidden_reconstruction_weights, and tied_input_reconstruction_weights.

{
//    declareOption(ol, "rbm_learning_rate", &DenoisingRecurrentNet::rbm_learning_rate,
//                  OptionBase::buildoption,
//                  "The learning rate used during RBM contrastive "
//                  "divergence learning phase.\n");

//    declareOption(ol, "rbm_nstages", &DenoisingRecurrentNet::rbm_nstages,
//                  OptionBase::buildoption,
//                  "Number of epochs for rbm phase.\n");


    declareOption(ol, "target_layers_weights", 
                  &DenoisingRecurrentNet::target_layers_weights,
                  OptionBase::buildoption,
                  "The training weights of each target layers.\n");

    declareOption(ol, "end_of_sequence_symbol", 
                  &DenoisingRecurrentNet::end_of_sequence_symbol,
                  OptionBase::buildoption,
                  "Value of the first input component for end-of-sequence "
                  "delimiter.\n");

    // TO DO: input_layer is to be removed eventually because only its size is really used
    declareOption(ol, "input_layer", &DenoisingRecurrentNet::input_layer,
                  OptionBase::buildoption,
                  "The input layer of the model.\n");

    declareOption(ol, "target_layers", &DenoisingRecurrentNet::target_layers,
                  OptionBase::buildoption,
                  "The target layers of the model.\n");

    declareOption(ol, "hidden_layer", &DenoisingRecurrentNet::hidden_layer,
                  OptionBase::buildoption,
                  "The hidden layer of the model.\n");

    declareOption(ol, "hidden_layer2", &DenoisingRecurrentNet::hidden_layer2,
                  OptionBase::buildoption,
                  "The second hidden layer of the model (optional).\n");

    declareOption(ol, "dynamic_connections", 
                  &DenoisingRecurrentNet::dynamic_connections,
                  OptionBase::buildoption,
                  "The RBMConnection between the first hidden layers, "
                  "through time (optional).\n");

    declareOption(ol, "dynamic_reconstruction_connections", 
                  &DenoisingRecurrentNet::dynamic_reconstruction_connections,
                  OptionBase::buildoption,
                  "The RBMConnection for the reconstruction between the hidden layers, "
                  "through time (optional).\n");

    declareOption(ol, "hidden_connections", 
                  &DenoisingRecurrentNet::hidden_connections,
                  OptionBase::buildoption,
                  "The RBMConnection between the first and second "
                  "hidden layers (optional).\n");

    declareOption(ol, "input_connections", 
                  &DenoisingRecurrentNet::input_connections,
                  OptionBase::buildoption,
                  "The RBMConnection from input_layer to hidden_layer.\n");

    declareOption(ol, "target_connections", 
                  &DenoisingRecurrentNet::target_connections,
                  OptionBase::buildoption,
                  "The RBMConnection from input_layer to hidden_layer.\n");

    declareOption(ol, "target_layers_n_of_target_elements", 
                  &DenoisingRecurrentNet::target_layers_n_of_target_elements,
                  OptionBase::learntoption,
                  "Number of elements in the target part of a VMatrix associated\n"
                  "to each target layer.\n");

    declareOption(ol, "input_symbol_sizes", 
                  &DenoisingRecurrentNet::input_symbol_sizes,
                  OptionBase::learntoption,
                  "Number of symbols for each symbolic field of train_set.\n");

    declareOption(ol, "target_symbol_sizes", 
                  &DenoisingRecurrentNet::target_symbol_sizes,
                  OptionBase::learntoption,
                  "Number of symbols for each symbolic field of train_set.\n");





    
    declareOption(ol, "encoding", 
                  &DenoisingRecurrentNet::encoding,
                  OptionBase::buildoption,
                  "Chooses what type of encoding to apply to an input sequence\n"
                  "Possibilities: timeframe, note_duration, note_octav_duration, raw_masked_supervised");

    declareOption(ol, "input_window_size", 
                  &DenoisingRecurrentNet::input_window_size,
                  OptionBase::buildoption,
                  "How many time steps to present as input\n"
                  "If it's 0, then all layers are essentially ignored, and instead an unconditional predictor is trained\n"
                  "This option is ignored when mode is raw_masked_supervised,"
                  "since in this mode the full expanded and preprocessed input and target are given explicitly."
        );

    declareOption(ol, "tied_input_reconstruction_weights", 
                  &DenoisingRecurrentNet::tied_input_reconstruction_weights,
                  OptionBase::buildoption,
                  "Do we want the input reconstruction weights tied or not\n"
                  "Boolean, yes or no");

    declareOption(ol, "input_noise_prob", 
                  &DenoisingRecurrentNet::input_noise_prob,
                  OptionBase::buildoption,
                  "Probability, for each neurone of each input, to be set to zero\n");

    declareOption(ol, "input_reconstruction_lr", 
                  &DenoisingRecurrentNet::input_reconstruction_lr,
                  OptionBase::buildoption,
                  "The learning rate used for the reconstruction\n");

    declareOption(ol, "hidden_noise_prob", 
                  &DenoisingRecurrentNet::hidden_noise_prob,
                  OptionBase::buildoption,
                  "Probability, for each neurone of each hidden layer, to be set to zero\n");

    declareOption(ol, "hidden_reconstruction_lr", 
                  &DenoisingRecurrentNet::hidden_reconstruction_lr,
                  OptionBase::buildoption,
                  "The learning rate used for the dynamic reconstruction through time\n");

    declareOption(ol, "tied_hidden_reconstruction_weights", 
                  &DenoisingRecurrentNet::tied_hidden_reconstruction_weights,
                  OptionBase::buildoption,
                  "Do we want the dynamic reconstruction weights tied or not\n"
                  "Boolean, yes or no");

    declareOption(ol, "noisy_recurrent_lr", 
                  &DenoisingRecurrentNet::noisy_recurrent_lr,
                  OptionBase::buildoption,
                  "The learning rate used in the noisy recurrent phase for the input reconstruction\n");

    declareOption(ol, "dynamic_gradient_scale_factor", 
                  &DenoisingRecurrentNet::dynamic_gradient_scale_factor,
                  OptionBase::buildoption,
                  "The scale factor of the learning rate used in the noisy recurrent phase for the dynamic hidden reconstruction\n");

    declareOption(ol, "recurrent_lr", 
                  &DenoisingRecurrentNet::recurrent_lr,
                  OptionBase::buildoption,
                  "The learning rate used in the fine tuning phase\n");

    declareOption(ol, "mean_encoded_vec", &DenoisingRecurrentNet::mean_encoded_vec,
                  OptionBase::learntoption,
                  "When training with trainUnconditionalPredictor (if input_window_size==0), this is simply used to store the the avg encoded frame");

    declareOption(ol, "prediction_cost_weight", &DenoisingRecurrentNet::prediction_cost_weight,
                  OptionBase::learntoption,
                  "The training weight for the target prediction");

    declareOption(ol, "input_reconstruction_cost_weight", &DenoisingRecurrentNet::input_reconstruction_cost_weight,
                  OptionBase::learntoption,
                  "The training weight for the input reconstruction");

    declareOption(ol, "hidden_reconstruction_cost_weight", &DenoisingRecurrentNet::hidden_reconstruction_cost_weight,
                  OptionBase::learntoption,
                  "The training weight for the hidden reconstruction");

    declareOption(ol, "nb_stage_reconstruction", &DenoisingRecurrentNet::nb_stage_reconstruction,
                  OptionBase::learntoption,
                  "The nomber of stage for de reconstructions");

    declareOption(ol, "nb_stage_target", &DenoisingRecurrentNet::nb_stage_target,
                  OptionBase::learntoption,
                  "The nomber of stage for de target");

    declareOption(ol, "L1_penalty_factor",
                  &DenoisingRecurrentNet::L1_penalty_factor,
                  OptionBase::buildoption,
                  "Optional (default=0) factor of L1 regularization term, i.e.\n"
                  "minimize L1_penalty_factor * sum_{ij} |weights(i,j)| "
                  "during training.\n");

    declareOption(ol, "L2_penalty_factor",
                  &DenoisingRecurrentNet::L2_penalty_factor,
                  OptionBase::buildoption,
                  "Optional (default=0) factor of L2 regularization term, i.e.\n"
                  "minimize 0.5 * L2_penalty_factor * sum_{ij} weights(i,j)^2 "
                  "during training.\n");
                  
                  
                  

 /*
    declareOption(ol, "", &DenoisingRecurrentNet::,
                  OptionBase::learntoption,
                  "");
     */

    // Now call the parent class' declareOptions
    inherited::declareOptions(ol);
}
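
A hypothetical configuration sketch using the build options declared above (all names are documented members of this class; the values are illustrative only):

    net->encoding = "note_octav_duration";
    net->input_window_size = 3;       // frames of context presented as input
    net->input_noise_prob = 0.3;      // denoising corruption on inputs
    net->hidden_noise_prob = 0.1;     // corruption on the recurrent hidden state
    net->input_reconstruction_lr = 0.01;
    net->hidden_reconstruction_lr = 0.01;
    net->recurrent_lr = 0.001;        // fine-tuning phase
    net->L2_penalty_factor = 1e-5;
    net->build();                     // re-build after changing options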

static const PPath& PLearn::DenoisingRecurrentNet::declaringFile ( ) [inline, static]

Reimplemented from PLearn::PLearner.

Definition at line 328 of file DenoisingRecurrentNet.h.

DenoisingRecurrentNet * PLearn::DenoisingRecurrentNet::deepCopy ( CopiesMap copies) const [virtual]

Reimplemented from PLearn::PLearner.

Definition at line 71 of file DenoisingRecurrentNet.cc.

int PLearn::DenoisingRecurrentNet::duration_to_number_of_timeframes ( int  duration) [static]

Definition at line 2415 of file DenoisingRecurrentNet.cc.

References PLERROR.

Referenced by encode_onehot_timeframe().

{
    PLERROR("duration_to_number_of_timeframes (used only when encoding==timeframe) is not yet implemented");
    return duration+1;
}

void PLearn::DenoisingRecurrentNet::encode_artificialData ( Mat  seq) const [private]

Definition at line 1006 of file DenoisingRecurrentNet.cc.

References encoded_seq, i, input_list, PLearn::PLearner::inputsize(), PLearn::TMat< T >::length(), PLearn::TVec< T >::length(), PLearn::TMat< T >::resize(), PLearn::TVec< T >::resize(), resize_lists(), PLearn::TVec< T >::size(), PLearn::TMat< T >::subMatColumns(), target_layers, targets_list, PLearn::PLearner::targetsize(), and PLearn::TMat< T >::width().

Referenced by encodeSequenceAndPopulateLists().

{
    int l = seq.length();
    int theInputsize = inputsize();
    int theTargetsize = targetsize();
    resize_lists(l);
    //int inputsize_without_masks = inputsize-targetsize;
    Mat input_part;
    input_part.resize(seq.length(),theInputsize);
    input_part << seq.subMatColumns(0,theInputsize);
    //Mat mask_part = seq.subMatColumns(inputsize, targetsize);
    Mat target_part = seq.subMatColumns(theInputsize, theTargetsize);

    //if(doNoise)
    //    inject_zero_forcing_noise(input_part, input_noise_prob);

    for(int i=0; i<l; i++)
        input_list[i] = input_part(i);

    int ntargets = target_layers.length();
    targets_list.resize(ntargets);
    //masks_list.resize(ntargets);
    int startcol = 0; // starting column of next target in target_part and mask_part
    for(int k=0; k<ntargets; k++)
    {
        int targsize = target_layers[k]->size;
        targets_list[k] = target_part.subMatColumns(startcol, targsize);
        //masks_list[k] = mask_part.subMatColumns(startcol, targsize);
        startcol += targsize;
    }

    encoded_seq.resize(input_part.length(), input_part.width());
    encoded_seq << input_part;


    /*int l = sequence.length();
 
    // reserve one extra bit to mean repetition
    encoded_sequence.resize(l, 1);
    encoded_sequence.clear();

    for(int i=0; i<l; i++)
    {
        int number = int(sequence(i,0));
        encoded_sequence(i,0) = number;        
        }    */
}    

void PLearn::DenoisingRecurrentNet::encode_onehot_diffNote_duration ( Mat sequence, Mat & encoded_sequence, bool use_silence, int duration_nbits = 20 ) [static]

Definition at line 2330 of file DenoisingRecurrentNet.cc.

References PLearn::TMat< T >::clear(), getDurationBit(), i, PLearn::TMat< T >::length(), PLERROR, and PLearn::TMat< T >::resize().

Referenced by encodeSequence().

{
    int l = sequence.length();
    // possible diffs: -21 ... -1 0 1 ... 21
    // index:            0     20 21 22    42
    int note_nbits = 43; // from -21 to 21

    encoded_sequence.resize(l,note_nbits+duration_nbits);
    encoded_sequence.clear();
    
    
    for(int i=0; i<l; i++)
    {
        //int midi_number = int(sequence(i,0));

        if(i==0) // first note: no previous note, so encode a diff of 0
        {
            encoded_sequence(i,21) = 1;
        }
        else{
            int diffNote = int(sequence(i,0))-int(sequence(i-1,0))+21;
            encoded_sequence(i,diffNote) = 1;
        }

       
        int duration_bit = getDurationBit(int(sequence(i,1)));
        if(duration_bit<0 || duration_bit>=duration_nbits)
            PLERROR("duration_bit out of valid range");
        encoded_sequence(i,note_nbits+duration_bit) = 1;
    }
}

void PLearn::DenoisingRecurrentNet::encode_onehot_note_octav_duration ( Mat sequence, Mat & encoded_sequence, int prepend_zero_rows, bool use_silence, int octav_nbits, int duration_nbits = 20 ) [static]

Definition at line 2363 of file DenoisingRecurrentNet.cc.

References PLearn::TMat< T >::clear(), getDurationBit(), i, PLearn::TMat< T >::length(), PLERROR, and PLearn::TMat< T >::resize().

Referenced by encodeSequence().

{
    int l = sequence.length();
    int note_nbits = use_silence ?13 :12;

    encoded_sequence.resize(prepend_zero_rows+l,note_nbits+octav_nbits+duration_nbits);
    encoded_sequence.clear();
    int octav_min = 10000;
    int octav_max = -10000;

    if(octav_nbits>0)
    {
        for(int i=0; i<l; i++)
        {
            int midi_number = int(sequence(i,0));
            int octav = midi_number/12;
            if(octav<octav_min)
                octav_min = octav;
            if(octav>octav_max)
                octav_max = octav;
        }
        if(octav_max-octav_min >= octav_nbits)
            PLERROR("Octav range too big. Does not fit in octav_nbits");
    }

    
    for(int i=0; i<l; i++)
    {
        int midi_number = int(sequence(i,0));
        if(midi_number==0) // silence
        {
            if(use_silence)
                encoded_sequence(prepend_zero_rows+i,12) = 1;
        }
        else
            encoded_sequence(prepend_zero_rows+i,midi_number%12) = 1;

        if(octav_nbits>0)
        {
            int octavpos = midi_number/12-octav_min;
            encoded_sequence(prepend_zero_rows+i,note_nbits+octavpos) = 1;
        }

        int duration_bit = getDurationBit(int(sequence(i,1)));
        if(duration_bit<0 || duration_bit>=duration_nbits)
            PLERROR("duration_bit out of valid range");
        encoded_sequence(prepend_zero_rows+i,note_nbits+octav_nbits+duration_bit) = 1;
    }
}

void PLearn::DenoisingRecurrentNet::encode_onehot_timeframe ( Mat sequence, Mat & encoded_sequence, int prepend_zero_rows, bool use_silence = false ) [static]

Definition at line 2426 of file DenoisingRecurrentNet.cc.

References PLearn::TMat< T >::clear(), duration_to_number_of_timeframes(), i, PLearn::TMat< T >::length(), and PLearn::TMat< T >::resize().

Referenced by encodeSequence().

{
    int l = sequence.length();
    int newl = 0;

    // First compute length of timeframe sequence
    for(int i=0; i<l; i++)
    {
        int duration = int(sequence(i,1));
        newl += duration_to_number_of_timeframes(duration);
    }

    int nnotes = use_silence ?13 :12;

    // reserve one extra bit to mean repetition
    encoded_sequence.resize(prepend_zero_rows+newl, nnotes+1);
    encoded_sequence.clear();

    int k=prepend_zero_rows;
    for(int i=0; i<l; i++)
    {
        int midi_number = int(sequence(i,0));
        if(midi_number==0) // silence
        {
            if(use_silence)
                encoded_sequence(k++,12) = 1;
        }
        else
            encoded_sequence(k++,midi_number%12) = 1;

        int duration = int(sequence(i,1));
        int nframes = duration_to_number_of_timeframes(duration);
        while(--nframes>0) // set repetition bit
            encoded_sequence(k++,nnotes) = 1;            
    }    
}

void PLearn::DenoisingRecurrentNet::encodeAndCreateSupervisedSequence ( Mat  seq) const [private]

Encodes seq, then populates input_list, targets_list, masks_list.

Definition at line 938 of file DenoisingRecurrentNet.cc.

References encoded_seq, encodeSequence(), input_list, input_window_size, PLearn::TMat< T >::length(), PLearn::TVec< T >::length(), PLERROR, PLearn::TVec< T >::resize(), resize_lists(), PLearn::TMat< T >::subMatRows(), target_layers, targets_list, PLearn::TMat< T >::toVec(), use_target_layers_masks, and PLearn::TMat< T >::width().

Referenced by encodeSequenceAndPopulateLists().

{
    if(use_target_layers_masks)
        PLERROR("Bug: use_target_layers_masks is expected to be false (no masks) when in encodeAndCreateSupervisedSequence");

    encodeSequence(seq, encoded_seq);
    // now work with encoded_seq
    int l = encoded_seq.length();
    resize_lists(l-input_window_size);


    int ntargets = target_layers.length();
    targets_list.resize(ntargets);
    //Mat targets = targets_list[0];
    //targets.resize(l, encoded_seq.width());
    targets_list[0].resize(l-input_window_size, encoded_seq.width());   
         
    for(int t=input_window_size; t<l; t++)
    {

        input_list[t-input_window_size] = encoded_seq.subMatRows(t-input_window_size,input_window_size).toVec();
        //perr << "t-input_window_size = " << endl;
        //perr << "subMat:" << endl << encoded_seq.subMatRows(t-input_window_size,input_window_size) << endl;
        //perr << "toVec:" << endl << encoded_seq.subMatRows(t-input_window_size,input_window_size).toVec() << endl;
        //perr << "input_list:" << endl << input_list[t-input_window_size] << endl;
        // target is copied so that when adding noise to input, it doesn't modify target 
        //targets(t-input_window_size) << encoded_seq(t);
        targets_list[0](t-input_window_size) << encoded_seq(t);
    }
}
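
With $w$ = input_window_size, $l$ the length of encoded_seq and $e_t$ its row $t$, the loop builds a sliding-window supervised problem: the input at each step is the concatenation of the $w$ previous encoded frames and the target is the current frame,

\[ x_{t-w} = (e_{t-w}, \ldots, e_{t-1}), \qquad y_{t-w} = e_t, \qquad t = w, \ldots, l-1. \]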

void PLearn::DenoisingRecurrentNet::encodeAndCreateSupervisedSequence2 ( Mat  seq) const [private]

Definition at line 901 of file DenoisingRecurrentNet.cc.

References encoded_seq, encodeSequence(), input_list, input_window_size, PLearn::TMat< T >::length(), PLearn::TVec< T >::length(), PLERROR, PLearn::TVec< T >::resize(), resize_lists(), PLearn::TVec< T >::size(), PLearn::TMat< T >::subMatRows(), target_layers, targets_list, PLearn::TMat< T >::toVec(), and use_target_layers_masks.

Referenced by encodeSequenceAndPopulateLists().

{
     if(use_target_layers_masks)
        PLERROR("Bug: use_target_layers_masks is expected to be false (no masks) when in encodeAndCreateSupervisedSequence");

    encodeSequence(seq, encoded_seq);
    // now work with encoded_seq
    Vec tempoTar;
    int l = encoded_seq.length();
    resize_lists(l-input_window_size);


    int ntargets = target_layers.length();
    targets_list.resize(ntargets);
   
    for(int tar=0; tar<ntargets; tar++)
    {
        int targsize = target_layers[tar]->size;
    
        targets_list[tar].resize(l-input_window_size, targsize);   
    }  
    int startTar;
    for(int t=input_window_size; t<l; t++)
    {

        input_list[t-input_window_size] = encoded_seq.subMatRows(t-input_window_size,input_window_size).toVec();
        startTar = 43;
        for(int tar=0; tar<ntargets; tar++)
        {
            int targsize = target_layers[tar]->size;
            targets_list[tar](t-input_window_size) << encoded_seq(t).subVec(startTar,targsize);
            startTar += targsize;
        }
    }
}

void PLearn::DenoisingRecurrentNet::encodeSequence ( Mat sequence, Mat & encoded_seq ) const

Encodes a sequence according to the specified encoding option (declared const because it needs to be called in test).

Possibilities: "timeframe", "note_duration", "note_octav_duration", "diffNote_duration".

Definition at line 2248 of file DenoisingRecurrentNet.cc.

References encode_onehot_diffNote_duration(), encode_onehot_note_octav_duration(), encode_onehot_timeframe(), encoding, input_window_size, PLERROR, and PLearn::TMat< T >::resize().

Referenced by encodeAndCreateSupervisedSequence(), and encodeAndCreateSupervisedSequence2().

{
    int prepend_zero_rows = input_window_size;

    // reserve some minimum space for encoded_seq
    encoded_seq.resize(5000, 4);

    if(encoding=="timeframe")
        encode_onehot_timeframe(sequence, encoded_seq, prepend_zero_rows);
    else if(encoding=="note_duration")
        encode_onehot_note_octav_duration(sequence, encoded_seq, prepend_zero_rows, false, 0);
    else if(encoding=="note_octav_duration")
        encode_onehot_note_octav_duration(sequence, encoded_seq, prepend_zero_rows, false, 4);    
    else if(encoding=="diffNote_duration")
        encode_onehot_diffNote_duration(sequence, encoded_seq, false);
    else if(encoding=="raw_masked_supervised")
        PLERROR("raw_masked_supervised means already encoded! You shouldnt have landed here!!!");
    else if(encoding=="generic")
        PLERROR("generic means already encoded! You shouldnt have landed here!!!");
    else
        PLERROR("unsupported encoding: %s",encoding.c_str());
}
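
A hypothetical usage sketch (assuming, as the encoders above do, that the raw sequence holds one (midi_number, duration) row per note):

    Mat seq;        // e.g. filled by getSequence(i, seq)
    Mat encoded;
    encodeSequence(seq, encoded);
    // with encoding == "note_octav_duration", each encoded row holds 12 note
    // bits, 4 octave bits and 20 duration bits, and the sequence is preceded
    // by input_window_size all-zero rows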

void PLearn::DenoisingRecurrentNet::encodeSequenceAndPopulateLists ( Mat seq, bool doNoise ) const [private]

Does encoding if needed and populates the lists.

Definition at line 888 of file DenoisingRecurrentNet.cc.

References encode_artificialData(), encodeAndCreateSupervisedSequence(), encodeAndCreateSupervisedSequence2(), encoding, and splitRawMaskedSupervisedSequence().

Referenced by test(), train(), and trainUnconditionalPredictor().

{
    if(encoding=="raw_masked_supervised") // old already encoded format (for backward testing)
        splitRawMaskedSupervisedSequence(seq, doNoise);
    else if(encoding=="generic")
        encode_artificialData(seq);
    else if(encoding=="note_octav_duration")
        encodeAndCreateSupervisedSequence(seq);
    else if(encoding=="diffNote_duration")
        encodeAndCreateSupervisedSequence2(seq);
}

void PLearn::DenoisingRecurrentNet::forget ( ) [virtual]

(Re-)initializes the PLearner in its fresh state (that state may depend on the 'seed' option) and sets 'stage' back to 0 (this is the stage of a fresh learner!).

Reimplemented from PLearn::PLearner.

Definition at line 582 of file DenoisingRecurrentNet.cc.

References PLearn::TVec< T >::clear(), dynamic_connections, dynamic_reconstruction_connections, PLearn::PLearner::forget(), hidden_connections, hidden_layer, hidden_layer2, i, input_connections, input_layer, input_reconstruction_bias, PLearn::TVec< T >::length(), PLearn::PLearner::stage, target_connections, and target_layers.

{
    inherited::forget();

    input_layer->forget();
    hidden_layer->forget();
    input_connections->forget();
    if( dynamic_connections )
        dynamic_connections->forget();
    if( dynamic_reconstruction_connections )
        dynamic_reconstruction_connections->forget();
    if( hidden_layer2 )
    {
        hidden_layer2->forget();
        hidden_connections->forget();
    }

    for( int i=0; i<target_layers.length(); i++ )
    {
        target_layers[i]->forget();
        target_connections[i]->forget();
    }

    input_reconstruction_bias.clear();

    stage = 0;
}

double PLearn::DenoisingRecurrentNet::fpropHiddenReconstructionFromLastHidden ( Vec  theInput,
Vec  hidden,
Mat  reconstruction_weights,
Mat &  acc_weights_gr,
Vec &  reconstruction_bias,
Vec &  reconstruction_bias2,
Vec  hidden_reconstruction_activation_grad,
Vec &  reconstruction_prob,
Vec  clean_input,
Vec  hidden_gradient,
double  hidden_reconstruction_cost_weight,
double  lr 
) [private]

Definition at line 1596 of file DenoisingRecurrentNet.cc.

References PLearn::TVec< T >::clear(), dynamic_act_no_bias_contribution, dynamic_connections, PLearn::externalProductScaleAcc(), PLearn::fastsigmoid(), hidden_layer, hidden_reconstruction_cost_weight, inject_zero_forcing_noise(), input_connections, input_noise_prob, j, PLearn::TVec< T >::length(), PLearn::multiplyAcc(), PLearn::productAcc(), PLearn::TVec< T >::resize(), PLearn::safelog(), PLearn::TVec< T >::size(), and PLearn::transposeProduct().

Referenced by recurrentUpdate().

{
    // set appropriate sizes
    int fullhiddenlength = hidden_target.length();
    Vec reconstruction_activation;
    Vec hidden_input_noise;
    Vec hidden_fprop_noise;
    Vec hidden_act_no_bias;
    Vec hidden_exp;
    Vec dynamic_act_no_bias_contribution;
    if(reconstruction_bias.length()==0)
    {
        reconstruction_bias.resize(fullhiddenlength);
        reconstruction_bias.clear();
    }
    if(reconstruction_bias2.length()==0)
    {
        reconstruction_bias2.resize(fullhiddenlength);
        reconstruction_bias2.clear();
    }
    reconstruction_activation.resize(fullhiddenlength);
    reconstruction_prob.resize(fullhiddenlength);

    hidden_fprop_noise.resize(fullhiddenlength);
    hidden_input_noise.resize(fullhiddenlength);
    hidden_act_no_bias.resize(fullhiddenlength);
    hidden_exp.resize(fullhiddenlength);
    dynamic_act_no_bias_contribution.resize(fullhiddenlength);

    input_connections->fprop( theInput, hidden_act_no_bias);
    hidden_input_noise << hidden_target;
    inject_zero_forcing_noise(hidden_input_noise, input_noise_prob);
    dynamic_connections->fprop(hidden_input_noise, dynamic_act_no_bias_contribution );
    hidden_act_no_bias += dynamic_act_no_bias_contribution;
    hidden_layer->fprop( hidden_act_no_bias, hidden_exp);
    //hidden_act_no_bias += reconstruction_bias2;
    //for( int j=0 ; j<fullhiddenlength ; j++ )
    //    hidden_fprop_noise[j] = fastsigmoid(hidden_act_no_bias[j] );

    // predict (denoised) hidden reconstruction 
    transposeProduct(reconstruction_activation, reconstruction_weights, hidden_exp); // dynamic matrix tied
    //product(reconstruction_activation, reconstruction_weights, hidden_exp); // dynamic matrix not tied
    reconstruction_activation += reconstruction_bias;

    for( int j=0 ; j<fullhiddenlength ; j++ )
        reconstruction_prob[j] = fastsigmoid( reconstruction_activation[j] );

    //hidden_layer->fprop(reconstruction_activation, reconstruction_prob);

    /********************************************************************************/
    hidden_reconstruction_activation_grad.resize(reconstruction_prob.size());
    hidden_reconstruction_activation_grad << reconstruction_prob;
    hidden_reconstruction_activation_grad -= hidden_target;
    hidden_reconstruction_activation_grad *= hidden_reconstruction_cost_weight;
    

    productAcc(hidden_gradient, reconstruction_weights, hidden_reconstruction_activation_grad); // dynamic matrix tied
    //transposeProductAcc(hidden_gradient, reconstruction_weights, hidden_reconstruction_activation_grad); // dynamic matrix not tied
    
    // update bias
    multiplyAcc(reconstruction_bias, hidden_reconstruction_activation_grad, -lr);
    // update weights
    externalProductScaleAcc(acc_weights_gr, hidden, hidden_reconstruction_activation_grad, -lr); // dynamic matrix tied
    //externalProductScaleAcc(acc_weights_gr, hidden_reconstruction_activation_grad, hidden, -lr); // dynamic matrix not tied
                
    //update bias2
    //multiplyAcc(reconstruction_bias2, hidden_gradient, -lr);
    /********************************************************************************/
    // Vec hidden_reconstruction_activation_grad;
    /*hidden_reconstruction_activation_grad.clear();
    for(int k=0; k<reconstruction_prob.length(); k++){
        //    hidden_reconstruction_activation_grad[k] = safelog(1-reconstruction_prob[k]) - safelog(reconstruction_prob[k]);
        hidden_reconstruction_activation_grad[k] = - reconstruction_activation[k];
        }*/

    double result_cost = 0;
    double neg_log_cost = 0; // negative log-likelihood (binary cross-entropy)
    for(int k=0; k<reconstruction_prob.length(); k++){
        //if(hidden_target[k]!=0)
        neg_log_cost -= hidden_target[k]*safelog(reconstruction_prob[k]) + (1-hidden_target[k])*safelog(1-reconstruction_prob[k]);
    }
    result_cost = neg_log_cost;
    
    return result_cost;
}
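
As a reading aid (not part of the source), with tied dynamic weight matrix W, target t = hidden_target and weight \lambda = hidden_reconstruction_cost_weight, the step above computes

    \tilde{h} = \sigma(W^{\top} h_{exp} + b), \qquad
    C = -\sum_k [\, t_k \log \tilde{h}_k + (1 - t_k) \log(1 - \tilde{h}_k) \,]

where h_{exp} is the hidden expectation obtained from the noised input, and the gradient seeded into the updates is \lambda (\tilde{h} - t).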

double PLearn::DenoisingRecurrentNet::fpropHiddenReconstructionFromLastHidden2 ( Vec  theInput,
Vec  hidden,
Mat  reconstruction_weights,
Mat &  acc_weights_gr,
Vec &  reconstruction_bias,
Vec &  reconstruction_bias2,
Vec  hidden_reconstruction_activation_grad,
Vec &  reconstruction_prob,
Vec  clean_input,
Vec  hidden_gradient,
double  hidden_reconstruction_cost_weight,
double  lr 
) [private]

Definition at line 1474 of file DenoisingRecurrentNet.cc.

References PLearn::TVec< T >::clear(), dynamic_act_no_bias_contribution, PLearn::externalProductScaleAcc(), PLearn::fastsigmoid(), hidden_reconstruction_cost_weight, i, j, PLearn::TVec< T >::length(), PLearn::multiplyAcc(), PLearn::productAcc(), PLearn::TVec< T >::resize(), PLearn::safelog(), PLearn::TVec< T >::size(), and PLearn::transposeProduct().

{
    // set appropriate sizes
    int fullhiddenlength = hidden_target.length();
    Vec reconstruction_activation;
    Vec reconstruction_activation2;
    Vec reconstruction_prob2;
    Vec hidden_act_no_bias;
    Vec hidden_exp;
    Vec dynamic_act_no_bias_contribution;
    Vec hidden_gradient2;
    if(reconstruction_bias.length()==0)
    {
        reconstruction_bias.resize(fullhiddenlength);
        reconstruction_bias.clear();
    }
    if(reconstruction_bias2.length()==0)
    {
        reconstruction_bias2.resize(fullhiddenlength);
        reconstruction_bias2.clear();
    }
    reconstruction_prob2.resize(fullhiddenlength);
    reconstruction_activation.resize(fullhiddenlength);
    reconstruction_activation2.resize(fullhiddenlength);
    reconstruction_prob.resize(fullhiddenlength);

   
    hidden_act_no_bias.resize(fullhiddenlength);
    hidden_exp.resize(fullhiddenlength);
    dynamic_act_no_bias_contribution.resize(fullhiddenlength);
    hidden_gradient2.resize(fullhiddenlength);
    

    // first reconstruction step: predict (denoised) hidden reconstruction 
    transposeProduct(reconstruction_activation, reconstruction_weights, hidden); // dynamic matrix tied
    //product(reconstruction_activation, reconstruction_weights, hidden); // dynamic matrix not tied
    reconstruction_activation += reconstruction_bias;

    for( int j=0 ; j<fullhiddenlength ; j++ )
        reconstruction_prob[j] = fastsigmoid( reconstruction_activation[j] );



     // second reconstruction step, through the same (tied) weights
    transposeProduct(reconstruction_activation2, reconstruction_weights, reconstruction_prob); // dynamic matrix tied
    reconstruction_activation2 += reconstruction_bias2;

    for( int j=0 ; j<fullhiddenlength ; j++ )
        reconstruction_prob2[j] = fastsigmoid( reconstruction_activation2[j] );


    //hidden_layer->fprop(reconstruction_activation, reconstruction_prob);

    /********************************************************************************/
    hidden_reconstruction_activation_grad.resize(reconstruction_prob.size());
    hidden_reconstruction_activation_grad << reconstruction_prob2;
    hidden_reconstruction_activation_grad -= hidden_target;
    hidden_reconstruction_activation_grad *= hidden_reconstruction_cost_weight;
    

    productAcc(hidden_gradient2, reconstruction_weights, hidden_reconstruction_activation_grad); // dynamic matrix tied
    //transposeProductAcc(hidden_gradient, reconstruction_weights, hidden_reconstruction_activation_grad); // dynamic matrix not tied
    
    // update bias2
    multiplyAcc(reconstruction_bias2, hidden_reconstruction_activation_grad, -lr);
    // update weights
    externalProductScaleAcc(acc_weights_gr, hidden, hidden_reconstruction_activation_grad, -lr); // dynamic matrix tied
    //externalProductScaleAcc(acc_weights_gr, hidden_reconstruction_activation_grad, hidden, -lr); // dynamic matrix not tied
    
    hidden_reconstruction_activation_grad.clear();

    // backpropagate through the first sigmoid and update the first bias
    for( int i=0 ; i<fullhiddenlength ; i++ )
    {
        real in_grad_i;
        in_grad_i = reconstruction_prob[i] * (1-reconstruction_prob[i]) * hidden_gradient2[i];
        hidden_reconstruction_activation_grad[i] += in_grad_i;
        
       
        // update the bias: bias -= learning_rate * input_gradient
        reconstruction_bias[i] -= lr * in_grad_i;
        
    }

    productAcc(hidden_gradient, reconstruction_weights, hidden_reconstruction_activation_grad); // dynamic matrix tied

    // update weights
    externalProductScaleAcc(acc_weights_gr, hidden, hidden_reconstruction_activation_grad, -lr); // dynamic matrix tied
    
  
    //update bias2
    //multiplyAcc(reconstruction_bias2, hidden_gradient, -lr);
    /********************************************************************************/
    // Vec hidden_reconstruction_activation_grad;
    /*hidden_reconstruction_activation_grad.clear();
    for(int k=0; k<reconstruction_prob.length(); k++){
        //    hidden_reconstruction_activation_grad[k] = safelog(1-reconstruction_prob[k]) - safelog(reconstruction_prob[k]);
        hidden_reconstruction_activation_grad[k] = - reconstruction_activation[k];
        }*/

    double result_cost = 0;
    double neg_log_cost = 0; // negative log-likelihood (binary cross-entropy)
    for(int k=0; k<reconstruction_prob.length(); k++){
        //if(hidden_target[k]!=0)
        neg_log_cost -= hidden_target[k]*safelog(reconstruction_prob[k]) + (1-hidden_target[k])*safelog(1-reconstruction_prob[k]);
    }
    result_cost = neg_log_cost;
    
    return result_cost;
}
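
In the same notation (a reading aid, not part of the source), this variant chains two reconstructions through the same tied weights,

    r^{(1)} = \sigma(W^{\top} h + b_1), \qquad r^{(2)} = \sigma(W^{\top} r^{(1)} + b_2),

seeds the gradient with \lambda (r^{(2)} - t) and backpropagates it through both applications of W; note that the returned cost is measured on r^{(1)}, exactly as in the listing above.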

double PLearn::DenoisingRecurrentNet::fpropHiddenSymmetricDynamicMatrix ( Vec  hidden,
Mat  reconstruction_weights,
Vec &  reconstruction_prob,
Vec  clean_input,
Vec  hidden_gradient,
double  hidden_reconstruction_cost_weight,
double  lr 
) [private]

Definition at line 1693 of file DenoisingRecurrentNet.cc.

References hidden_layer, hidden_reconstruction_cost_weight, PLearn::TVec< T >::length(), PLearn::productAcc(), PLearn::TVec< T >::resize(), PLearn::safelog(), PLearn::TVec< T >::size(), and PLearn::transposeProduct().

{
    // set appropriate sizes
    int fullinputlength = hidden_target.length();
    Vec reconstruction_activation;
   
    reconstruction_activation.resize(fullinputlength);
    reconstruction_prob.resize(fullinputlength);

    // predict (denoised) hidden reconstruction 
    transposeProduct(reconstruction_activation, reconstruction_weights, hidden); // Stan's trick
    //product(reconstruction_activation, reconstruction_weights, hidden); 
    //reconstruction_activation += hidden_layer->bias;
    
    hidden_layer->fprop(reconstruction_activation, reconstruction_prob);

    /********************************************************************************/
    Vec hidden_reconstruction_activation_grad;
    hidden_reconstruction_activation_grad.resize(reconstruction_prob.size());
    hidden_reconstruction_activation_grad << reconstruction_prob;
    hidden_reconstruction_activation_grad -= hidden_target;
    hidden_reconstruction_activation_grad *= hidden_reconstruction_cost_weight;

    productAcc(hidden_gradient, reconstruction_weights, hidden_reconstruction_activation_grad);
    /********************************************************************************/

    double result_cost = 0;
    double neg_log_cost = 0; // negative log-likelihood
    for(int k=0; k<reconstruction_prob.length(); k++)
        if(hidden_target[k]!=0)
            neg_log_cost -= hidden_target[k]*safelog(reconstruction_prob[k]);
    result_cost = neg_log_cost;
    
    return result_cost;
}

double PLearn::DenoisingRecurrentNet::fpropInputReconstructionFromHidden ( Vec  hidden,
Mat  reconstruction_weights,
Vec &  input_reconstruction_bias,
Vec &  input_reconstruction_prob,
Vec  clean_input 
) [private]

Builds input_reconstruction_prob from hidden (using reconstruction_weights, which is nhidden x ninputs, and input_reconstruction_bias). Also computes the negative log cost and returns it.

Definition at line 1385 of file DenoisingRecurrentNet.cc.

References applyMultipleSoftmaxToInputWindow(), PLearn::TVec< T >::clear(), encoding, PLearn::TVec< T >::length(), PLearn::TVec< T >::resize(), PLearn::safelog(), PLearn::softmax(), and PLearn::transposeProduct().

Referenced by fpropUpdateInputReconstructionFromHidden().

{
    // set appropriate sizes
    int fullinputlength = clean_input.length();
    Vec reconstruction_activation;
    if(reconstruction_bias.length()==0)
    {
        reconstruction_bias.resize(fullinputlength);
        reconstruction_bias.clear();
    }
    reconstruction_activation.resize(fullinputlength);
    reconstruction_prob.resize(fullinputlength);

    // predict (denoised) input_reconstruction 
    transposeProduct(reconstruction_activation, reconstruction_weights, hidden); 
    reconstruction_activation += reconstruction_bias;

    softmax(reconstruction_activation, reconstruction_prob);

        /*for( int j=0 ; j<fullinputlength ; j++ ){
        if(clean_input[j]==1 || clean_input[j]==0)
            reconstruction_prob[j] = fastsigmoid( reconstruction_activation[j] );
        else
            reconstruction_prob[j] = reconstruction_activation[j] ;
            }*/

    double result_cost = 0;
    if(encoding=="raw_masked_supervised") // || encoding=="generic") // mixed input format: see cost computation below
    {
        double r = 0;
        double neg_log_cost = 0; // accumulated cost: cross-entropy on binary entries, squared error on the rest
        for(int k=0; k<reconstruction_prob.length(); k++){
            if(clean_input[k]==1 || clean_input[k]==0){
                neg_log_cost -= clean_input[k]*safelog(reconstruction_prob[k]) + (1-clean_input[k])*safelog(1-reconstruction_prob[k]);
            }                
            else{
                r = reconstruction_prob[k] - clean_input[k];
                neg_log_cost += r*r;
            }
            
            
        }
        result_cost = neg_log_cost;
        
        /*real r;
        //reconstruction_prob << reconstruction_activation;
        for(int i=0; i<reconstruction_activation.length(); i++){
            r = reconstruction_activation[i] - clean_input[i];
            result_cost += r*r;
            }*/
    }
    else // assume it's a multiple softmax
    {
        applyMultipleSoftmaxToInputWindow(reconstruction_activation, reconstruction_prob);
    
        double neg_log_cost = 0; // neg log softmax
        for(int k=0; k<reconstruction_prob.length(); k++)
            if(clean_input[k]!=0)
                neg_log_cost -= clean_input[k]*safelog(reconstruction_prob[k]);
        result_cost = neg_log_cost;
    }
    return result_cost;
}
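
As a reading aid (not part of the source): with activation a = W^{\top} h + b, the multiple-softmax branch computes p = \mathrm{softmax}_{per\,group}(a) and C = -\sum_k x_k \log p_k, while the raw_masked_supervised branch scores binary components of the clean input x by cross-entropy and real-valued components by squared error (p_k - x_k)^2.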

double PLearn::DenoisingRecurrentNet::fpropUpdateInputReconstructionFromHidden ( Vec  hidden,
Mat &  reconstruction_weights,
Mat &  acc_weights_gr,
Vec &  input_reconstruction_bias,
Vec &  input_reconstruction_prob,
Vec  clean_input,
Vec  hidden_gradient,
double  input_reconstruction_cost_weight,
double  lr 
) [private]

Builds input_reconstruction_prob from hidden (using reconstruction_weights, which is nhidden x ninputs, and input_reconstruction_bias), then backpropagates the reconstruction cost (after comparison with clean_input) with learning rate input_reconstruction_lr. Accumulates the gradient in hidden_gradient, and updates reconstruction_weights and input_reconstruction_bias. Also computes the negative log cost and returns it.

Definition at line 1373 of file DenoisingRecurrentNet.cc.

References fpropInputReconstructionFromHidden(), and updateInputReconstructionFromHidden().

Referenced by recurrentUpdate().

{
    double cost = fpropInputReconstructionFromHidden(hidden, reconstruction_weights, input_reconstruction_bias, input_reconstruction_prob, clean_input);
    updateInputReconstructionFromHidden(hidden, reconstruction_weights, acc_weights_gr, input_reconstruction_bias, input_reconstruction_prob, 
                                        clean_input, hidden_gradient, input_reconstruction_cost_weight, lr);
    return cost;
}

void PLearn::DenoisingRecurrentNet::generate ( int  t,
int  n 
)

Generate music in a folder.

Definition at line 2705 of file DenoisingRecurrentNet.cc.

References clamp_units(), PLearn::TVec< T >::clear(), data, dynamic_act_no_bias_contribution, dynamic_connections, end_of_sequence_symbol, PLearn::fast_exact_is_equal(), hidden2_act_no_bias_list, hidden2_list, hidden_act_no_bias_list, hidden_connections, hidden_layer, hidden_layer2, hidden_list, i, input_connections, input_layer, input_list, input_symbol_sizes, PLearn::PLearner::inputsize(), j, PLearn::TVec< T >::length(), masks_list, nll_list, outputsize(), PLearn::TVec< T >::resize(), PLearn::TMat< T >::resize(), PLearn::TVec< T >::size(), PLearn::TVec< T >::subVec(), target_connections, target_layers, target_layers_n_of_target_elements, target_layers_weights, target_prediction_act_no_bias_list, target_prediction_list, target_symbol_sizes, targets_list, PLearn::PLearner::targetsize(), and use_target_layers_masks.

{
    //PPath* the_filename = "/home/stan/Documents/recherche_maitrise/DDBN_bosendorfer/data/generate/scoreGen.amat";
    data = new AutoVMatrix();
    //data->filename = "/home/stan/Documents/recherche_maitrise/DDBN_bosendorfer/data/listData/target_tm12_input_t_tm12_tp12/scoreGen_tar_tm12__in_tm12_tp12.amat";
    //data->filename = "/home/stan/Documents/recherche_maitrise/DDBN_bosendorfer/create_data/scoreGenSuitePerf.amat";
    data->filename = "/home/stan/cvs/Gamme/expressive_data/dataGen.amat";

    data->defineSizes(163,16,0);
    //data->inputsize = 21;
    //data->targetsize = 0;
    //data->weightsize = 0;
    data->build();

    
    
   
   

    int len = data->length();
    int tarSize = outputsize();
    int partTarSize;
    Vec input;
    Vec target;
    real weight;
    int targsize;

    Vec output(outputsize());
    output.clear();
//     Vec costs(nTestCosts());
//     costs.clear();
//     Vec n_items(nTestCosts());
//     n_items.clear();

    int r,r2;
    use_target_layers_masks = true;

    int ith_sample_in_sequence = 0;
    int inputsize_without_masks = inputsize() 
        - ( use_target_layers_masks ? targetsize() : 0 );
    int sum_target_elements = 0;
    for (int i = 0; i < len; i++)
    {
        data->getExample(i, input, target, weight);
        if(i>n)
        {
            for (int k = 1; k <= t; k++)
            {
                if(k<=i){
                    partTarSize = outputsize();
                    for( int tar=0; tar < target_layers.length(); tar++ )
                    {
                        
                        input.subVec(inputsize_without_masks-(tarSize*(t-k))-partTarSize-1,target_layers[tar]->size) << target_prediction_list[tar](ith_sample_in_sequence-k);
                        partTarSize -= target_layers[tar]->size;
                        
                        
                    }
                }
            }       
        }
    

//         for (int k = 1; k <= t; k++)
//         {
//             partTarSize = outputsize();
//             for( int tar=0; tar < target_layers.length(); tar++ )
//             {
//                 if(i>=t){
//                     input.subVec(inputsize_without_masks-(tarSize*(t-k))-partTarSize-1,target_layers[tar]->size) << target_prediction_list[tar](ith_sample_in_sequence-k);
//                     partTarSize -= target_layers[tar]->size;
//                 }
//             }
//         }

        if( fast_exact_is_equal(input[0],end_of_sequence_symbol) )
        {
//             ith_sample_in_sequence = 0;
//             hidden_list.resize(0);
//             hidden_act_no_bias_list.resize(0);
//             hidden2_list.resize(0);
//             hidden2_act_no_bias_list.resize(0);
//             target_prediction_list.resize(0);
//             target_prediction_act_no_bias_list.resize(0);
//             input_list.resize(0);
//             targets_list.resize(0);
//             nll_list.resize(0,0);
//             masks_list.resize(0);

            

            continue;
        }

        // Resize internal variables
        hidden_list.resize(ith_sample_in_sequence+1, hidden_layer->size);
        hidden_act_no_bias_list.resize(ith_sample_in_sequence+1, hidden_layer->size);
        if( hidden_layer2 )
        {
            hidden2_list.resize(ith_sample_in_sequence+1, hidden_layer2->size);
            hidden2_act_no_bias_list.resize(ith_sample_in_sequence+1, hidden_layer2->size);
        }
                 
        input_list.resize(ith_sample_in_sequence+1);
        input_list[ith_sample_in_sequence].resize(input_layer->size);

        targets_list.resize( target_layers.length() );
        target_prediction_list.resize( target_layers.length() );
        target_prediction_act_no_bias_list.resize( target_layers.length() );
        for( int tar=0; tar < target_layers.length(); tar++ )
        {
            if( !fast_exact_is_equal(target_layers_weights[tar],0) )
            {
                targsize = target_layers[tar]->size;
                targets_list[tar].resize( ith_sample_in_sequence+1, targsize);
                //targets_list[tar][ith_sample_in_sequence].resize( target_layers[tar]->size);
                target_prediction_list[tar].resize(
                    ith_sample_in_sequence+1, targsize);
                target_prediction_act_no_bias_list[tar].resize(
                    ith_sample_in_sequence+1, targsize);
            }
        }
        nll_list.resize(ith_sample_in_sequence+1,target_layers.length());
        if( use_target_layers_masks )
        {
            masks_list.resize( target_layers.length() );
            for( int tar=0; tar < target_layers.length(); tar++ )
                if( !fast_exact_is_equal(target_layers_weights[tar],0) )
                    masks_list[tar].resize( ith_sample_in_sequence+1, target_layers[tar]->size );
        }

        // Forward propagation

        // Fetch right representation for input
        clamp_units(input.subVec(0,inputsize_without_masks),
                    input_layer,
                    input_symbol_sizes);                
        input_list[ith_sample_in_sequence] << input_layer->expectation;

        // Fetch right representation for target
        sum_target_elements = 0;
        for( int tar=0; tar < target_layers.length(); tar++ )
        {
            if( !fast_exact_is_equal(target_layers_weights[tar],0) )
            {
                if( use_target_layers_masks )
                {
                    Vec masks_list_tar_i = masks_list[tar](ith_sample_in_sequence);
                    clamp_units(target.subVec(
                                    sum_target_elements,
                                    target_layers_n_of_target_elements[tar]),
                                target_layers[tar],
                                target_symbol_sizes[tar],
                                input.subVec(
                                    inputsize_without_masks 
                                    + sum_target_elements, 
                                    target_layers_n_of_target_elements[tar]),
                                masks_list_tar_i
                        );
                    
                }
                else
                {
                    clamp_units(target.subVec(
                                    sum_target_elements,
                                    target_layers_n_of_target_elements[tar]),
                                target_layers[tar],
                                target_symbol_sizes[tar]);
                }
                targets_list[tar](ith_sample_in_sequence) << 
                    target_layers[tar]->expectation;
            }
            sum_target_elements += target_layers_n_of_target_elements[tar];
        }
        
        Vec hidden_act_no_bias_i = hidden_act_no_bias_list(ith_sample_in_sequence);
        input_connections->fprop( input_list[ith_sample_in_sequence], 
                                  hidden_act_no_bias_i);
                
        if( ith_sample_in_sequence > 0 && dynamic_connections )
        {
            dynamic_connections->fprop( 
                hidden_list(ith_sample_in_sequence-1),
                dynamic_act_no_bias_contribution );

            hidden_act_no_bias_list(ith_sample_in_sequence) += 
                dynamic_act_no_bias_contribution;
        }
        
        Vec hidden_i = hidden_list(ith_sample_in_sequence);
        hidden_layer->fprop( hidden_act_no_bias_i, 
                             hidden_i );

        Vec last_hidden = hidden_i;
                 
        if( hidden_layer2 )
        {
            Vec hidden2_i = hidden2_list(ith_sample_in_sequence); 
            Vec hidden2_act_no_bias_i = hidden2_act_no_bias_list(ith_sample_in_sequence);

            hidden_connections->fprop( 
                hidden2_i,
                hidden2_act_no_bias_i);

            hidden_layer2->fprop( 
                hidden2_act_no_bias_i,
                hidden2_i 
                );

            last_hidden = hidden2_i; // last hidden layer vec 
        }
           
       
        for( int tar=0; tar < target_layers.length(); tar++ )
        {
            if( !fast_exact_is_equal(target_layers_weights[tar],0) )
            {
                Vec target_prediction_i = target_prediction_list[tar](ith_sample_in_sequence); // index by position within the sequence, not by dataset row
                Vec target_prediction_act_no_bias_i = target_prediction_act_no_bias_list[tar](ith_sample_in_sequence);
                target_connections[tar]->fprop(
                    last_hidden,
                    target_prediction_act_no_bias_i
                    );
                target_layers[tar]->fprop(
                    target_prediction_act_no_bias_i,
                    target_prediction_i );
                if( use_target_layers_masks )
                    target_prediction_i *= masks_list[tar](ith_sample_in_sequence);
            }
        }
        

        

        sum_target_elements = 0;
        for( int tar=0; tar < target_layers.length(); tar++ )
        {
            if( !fast_exact_is_equal(target_layers_weights[tar],0) )
            {
                target_layers[tar]->activation << 
                    target_prediction_act_no_bias_list[tar](
                        ith_sample_in_sequence);
                target_layers[tar]->activation += target_layers[tar]->bias;
                target_layers[tar]->setExpectation(
                    target_prediction_list[tar](
                        ith_sample_in_sequence));
                nll_list(ith_sample_in_sequence,tar) = 
                    target_layers[tar]->fpropNLL( 
                        targets_list[tar](ith_sample_in_sequence) ); 
//                 costs[tar] += nll_list(ith_sample_in_sequence,tar);
                
//                 // Normalize by the number of things to predict
//                 if( use_target_layers_masks )
//                 {
//                     n_items[tar] += sum(
//                         input.subVec( inputsize_without_masks 
//                                       + sum_target_elements, 
//                                       target_layers_n_of_target_elements[tar]) );
//                 }
//                 else
//                 n_items[tar]++;
            }
            if( use_target_layers_masks )
                sum_target_elements += 
                    target_layers_n_of_target_elements[tar];
        }
        ith_sample_in_sequence++;

        

    }

//     ith_sample_in_sequence = 0;
//     hidden_list.resize(0);
//     hidden_act_no_bias_list.resize(0);
//     hidden2_list.resize(0);
//     hidden2_act_no_bias_list.resize(0);
//     target_prediction_list.resize(0);
//     target_prediction_act_no_bias_list.resize(0);
//     input_list.resize(0);
//     targets_list.resize(0);
//     nll_list.resize(0,0);
//     masks_list.resize(0);   

    
    //Vec tempo;
    //TVec<real> tempo;
    //tempo.resize(visible_layer->size);
    ofstream myfile;
    myfile.open ("/home/stan/Documents/recherche_maitrise/DDBN_bosendorfer/data/generate/test.txt");
    
    for (int i = 0; i < target_prediction_list[0].length() ; i++ ){
       
       
        for( int tar=0; tar < target_layers.length(); tar++ )
        {
            for (int j = 0; j < target_prediction_list[tar](i).length() ; j++ ){
                
                //if(i>n){
                    myfile << target_prediction_list[tar](i)[j] << " ";
                    // }
                    //else{
                    //    myfile << targets_list[tar](i)[j] << " ";
                    // }
                       
           
            }
        }
        myfile << "\n";
    }
     

     myfile.close();

}
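
The data and output paths above are hardcoded, so the method only runs as-is in its author's environment. A hypothetical call, reading t and n as the loop above suggests (t = feedback window of past predictions, n = row after which predictions are fed back into the input):

    net->generate(12, 100); // feed back the last 12 predicted frames after row 100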

void PLearn::DenoisingRecurrentNet::generateArtificial ( )

Generate music in a folder.

Definition at line 3021 of file DenoisingRecurrentNet.cc.

References clamp_units(), PLearn::TVec< T >::clear(), data, dynamic_act_no_bias_contribution, dynamic_connections, end_of_sequence_symbol, PLearn::fast_exact_is_equal(), hidden2_act_no_bias_list, hidden2_list, hidden_act_no_bias_list, hidden_connections, hidden_layer, hidden_layer2, hidden_list, i, input_connections, input_layer, input_list, input_symbol_sizes, PLearn::PLearner::inputsize(), j, PLearn::TVec< T >::length(), masks_list, nll_list, outputsize(), PLearn::TMat< T >::resize(), PLearn::TVec< T >::resize(), PLearn::TVec< T >::size(), PLearn::TVec< T >::subVec(), target_connections, target_layers, target_layers_n_of_target_elements, target_layers_weights, target_prediction_act_no_bias_list, target_prediction_list, target_symbol_sizes, targets_list, PLearn::PLearner::targetsize(), and use_target_layers_masks.

{
    //PPath* the_filename = "/home/stan/Documents/recherche_maitrise/DDBN_bosendorfer/data/generate/scoreGen.amat";
    data = new AutoVMatrix();
    //data->filename = "/home/stan/Documents/recherche_maitrise/DDBN_bosendorfer/data/listData/target_tm12_input_t_tm12_tp12/scoreGen_tar_tm12__in_tm12_tp12.amat";
    //data->filename = "/home/stan/Documents/recherche_maitrise/DDBN_bosendorfer/create_data/scoreGenSuitePerf.amat";
    //data->filename = "/home/stan/cvs/Gamme/expressive_data/dataGen.amat";
    data->filename = "/home/stan/Documents/recherche_maitrise/artificialData/generate/dataGen.amat";
    data->defineSizes(1,1,0);
    //data->defineSizes(163,16,0);
    //data->inputsize = 21;
    //data->targetsize = 0;
    //data->weightsize = 0;
    data->build();

    
    
   
   

    int len = data->length();
    int tarSize = outputsize();
    int partTarSize;
    Vec input;
    Vec target;
    real weight;
    int targsize;

    Vec output(outputsize());
    output.clear();
//     Vec costs(nTestCosts());
//     costs.clear();
//     Vec n_items(nTestCosts());
//     n_items.clear();

    int r,r2;
    use_target_layers_masks = false;

    int ith_sample_in_sequence = 0;
    int inputsize_without_masks = inputsize() 
        - ( use_target_layers_masks ? targetsize() : 0 );
    int sum_target_elements = 0;
    for (int i = 0; i < len; i++)
    {
        data->getExample(i, input, target, weight);
        /*if(i>n)
        {
            for (int k = 1; k <= t; k++)
            {
                if(k<=i){
                    partTarSize = outputsize();
                    for( int tar=0; tar < target_layers.length(); tar++ )
                    {
                        
                        input.subVec(inputsize_without_masks-(tarSize*(t-k))-partTarSize-1,target_layers[tar]->size) << target_prediction_list[tar](ith_sample_in_sequence-k);
                        partTarSize -= target_layers[tar]->size;
                        
                        
                    }
                }
            }       
            }*/
    

//         for (int k = 1; k <= t; k++)
//         {
//             partTarSize = outputsize();
//             for( int tar=0; tar < target_layers.length(); tar++ )
//             {
//                 if(i>=t){
//                     input.subVec(inputsize_without_masks-(tarSize*(t-k))-partTarSize-1,target_layers[tar]->size) << target_prediction_list[tar](ith_sample_in_sequence-k);
//                     partTarSize -= target_layers[tar]->size;
//                 }
//             }
//         }

        if( fast_exact_is_equal(input[0],end_of_sequence_symbol) )
        {
//             ith_sample_in_sequence = 0;
//             hidden_list.resize(0);
//             hidden_act_no_bias_list.resize(0);
//             hidden2_list.resize(0);
//             hidden2_act_no_bias_list.resize(0);
//             target_prediction_list.resize(0);
//             target_prediction_act_no_bias_list.resize(0);
//             input_list.resize(0);
//             targets_list.resize(0);
//             nll_list.resize(0,0);
//             masks_list.resize(0);

            

            continue;
        }

        // Resize internal variables
        hidden_list.resize(ith_sample_in_sequence+1, hidden_layer->size);
        hidden_act_no_bias_list.resize(ith_sample_in_sequence+1, hidden_layer->size);
        if( hidden_layer2 )
        {
            hidden2_list.resize(ith_sample_in_sequence+1, hidden_layer2->size);
            hidden2_act_no_bias_list.resize(ith_sample_in_sequence+1, hidden_layer2->size);
        }
                 
        input_list.resize(ith_sample_in_sequence+1);
        input_list[ith_sample_in_sequence].resize(input_layer->size);

        targets_list.resize( target_layers.length() );
        target_prediction_list.resize( target_layers.length() );
        target_prediction_act_no_bias_list.resize( target_layers.length() );
        for( int tar=0; tar < target_layers.length(); tar++ )
        {
            if( !fast_exact_is_equal(target_layers_weights[tar],0) )
            {
                targsize = target_layers[tar]->size;
                targets_list[tar].resize( ith_sample_in_sequence+1, targsize);
                //targets_list[tar][ith_sample_in_sequence].resize( target_layers[tar]->size);
                target_prediction_list[tar].resize(
                    ith_sample_in_sequence+1, targsize);
                target_prediction_act_no_bias_list[tar].resize(
                    ith_sample_in_sequence+1, targsize);
            }
        }
        nll_list.resize(ith_sample_in_sequence+1,target_layers.length());
        if( use_target_layers_masks )
        {
            masks_list.resize( target_layers.length() );
            for( int tar=0; tar < target_layers.length(); tar++ )
                if( !fast_exact_is_equal(target_layers_weights[tar],0) )
                    masks_list[tar].resize( ith_sample_in_sequence+1, target_layers[tar]->size );
        }

        // Forward propagation

        // Fetch right representation for input
        clamp_units(input.subVec(0,inputsize_without_masks),
                    input_layer,
                    input_symbol_sizes);                
        input_list[ith_sample_in_sequence] << input_layer->expectation;

        // Fetch right representation for target
        sum_target_elements = 0;
        for( int tar=0; tar < target_layers.length(); tar++ )
        {
            if( !fast_exact_is_equal(target_layers_weights[tar],0) )
            {
                if( use_target_layers_masks )
                {
                    Vec masks_list_tar_i = masks_list[tar](ith_sample_in_sequence);
                    clamp_units(target.subVec(
                                    sum_target_elements,
                                    target_layers_n_of_target_elements[tar]),
                                target_layers[tar],
                                target_symbol_sizes[tar],
                                input.subVec(
                                    inputsize_without_masks 
                                    + sum_target_elements, 
                                    target_layers_n_of_target_elements[tar]),
                                masks_list_tar_i
                        );
                    
                }
                else
                {
                    clamp_units(target.subVec(
                                    sum_target_elements,
                                    target_layers_n_of_target_elements[tar]),
                                target_layers[tar],
                                target_symbol_sizes[tar]);
                }
                targets_list[tar](ith_sample_in_sequence) << 
                    target_layers[tar]->expectation;
            }
            sum_target_elements += target_layers_n_of_target_elements[tar];
        }
        
        Vec hidden_act_no_bias_i = hidden_act_no_bias_list(ith_sample_in_sequence);
        input_connections->fprop( input_list[ith_sample_in_sequence], 
                                  hidden_act_no_bias_i);
                
        if( ith_sample_in_sequence > 0 && dynamic_connections )
        {
            dynamic_connections->fprop( 
                hidden_list(ith_sample_in_sequence-1),
                dynamic_act_no_bias_contribution );

            hidden_act_no_bias_list(ith_sample_in_sequence) += 
                dynamic_act_no_bias_contribution;
        }
        
        Vec hidden_i = hidden_list(ith_sample_in_sequence);
        hidden_layer->fprop( hidden_act_no_bias_i, 
                             hidden_i );

        Vec last_hidden = hidden_i;
                 
        if( hidden_layer2 )
        {
            Vec hidden2_i = hidden2_list(ith_sample_in_sequence); 
            Vec hidden2_act_no_bias_i = hidden2_act_no_bias_list(ith_sample_in_sequence);

            hidden_connections->fprop( 
                hidden2_i,
                hidden2_act_no_bias_i);

            hidden_layer2->fprop( 
                hidden2_act_no_bias_i,
                hidden2_i 
                );

            last_hidden = hidden2_i; // last hidden layer vec 
        }
           
       
        for( int tar=0; tar < target_layers.length(); tar++ )
        {
            if( !fast_exact_is_equal(target_layers_weights[tar],0) )
            {
                Vec target_prediction_i = target_prediction_list[tar](ith_sample_in_sequence); // index by position within the sequence, not by dataset row
                Vec target_prediction_act_no_bias_i = target_prediction_act_no_bias_list[tar](ith_sample_in_sequence);
                target_connections[tar]->fprop(
                    last_hidden,
                    target_prediction_act_no_bias_i
                    );
                target_layers[tar]->fprop(
                    target_prediction_act_no_bias_i,
                    target_prediction_i );
                if( use_target_layers_masks )
                    target_prediction_i *= masks_list[tar](ith_sample_in_sequence);
            }
        }
        

        

        sum_target_elements = 0;
        for( int tar=0; tar < target_layers.length(); tar++ )
        {
            if( !fast_exact_is_equal(target_layers_weights[tar],0) )
            {
                target_layers[tar]->activation << 
                    target_prediction_act_no_bias_list[tar](
                        ith_sample_in_sequence);
                target_layers[tar]->activation += target_layers[tar]->bias;
                target_layers[tar]->setExpectation(
                    target_prediction_list[tar](
                        ith_sample_in_sequence));
                nll_list(ith_sample_in_sequence,tar) = 
                    target_layers[tar]->fpropNLL( 
                        targets_list[tar](ith_sample_in_sequence) ); 
//                 costs[tar] += nll_list(ith_sample_in_sequence,tar);
                
//                 // Normalize by the number of things to predict
//                 if( use_target_layers_masks )
//                 {
//                     n_items[tar] += sum(
//                         input.subVec( inputsize_without_masks 
//                                       + sum_target_elements, 
//                                       target_layers_n_of_target_elements[tar]) );
//                 }
//                 else
//                 n_items[tar]++;
            }
            if( use_target_layers_masks )
                sum_target_elements += 
                    target_layers_n_of_target_elements[tar];
        }
        ith_sample_in_sequence++;

        

    }

//     ith_sample_in_sequence = 0;
//     hidden_list.resize(0);
//     hidden_act_no_bias_list.resize(0);
//     hidden2_list.resize(0);
//     hidden2_act_no_bias_list.resize(0);
//     target_prediction_list.resize(0);
//     target_prediction_act_no_bias_list.resize(0);
//     input_list.resize(0);
//     targets_list.resize(0);
//     nll_list.resize(0,0);
//     masks_list.resize(0);   

    
    //Vec tempo;
    //TVec<real> tempo;
    //tempo.resize(visible_layer->size);
    ofstream myfile;
    myfile.open ("/home/stan/Documents/recherche_maitrise/artificialData/generate/generationResult.txt");
    
    for (int i = 0; i < target_prediction_list[0].length() ; i++ ){
       
       
        for( int tar=0; tar < target_layers.length(); tar++ )
        {
            for (int j = 0; j < target_prediction_list[tar](i).length() ; j++ ){
                
                //if(i>n){
                    myfile << target_prediction_list[tar](i)[j] << " ";
                    myfile << targets_list[tar](i)[j] << " ";
                    // }
                    //else{
                    //    myfile << targets_list[tar](i)[j] << " ";
                    // }
                       
           
            }
        }
        myfile << "\n";
    }
     

     myfile.close();

}

int PLearn::DenoisingRecurrentNet::getDurationBit ( int  duration) [static]

Definition at line 2305 of file DenoisingRecurrentNet.cc.

Referenced by encode_onehot_diffNote_duration(), and encode_onehot_note_octav_duration().

{
    if(duration==5)  // map infrequent 5 to 4
        duration=4;
    return duration;
}

Mat PLearn::DenoisingRecurrentNet::getDynamicConnectionsWeightMatrix ( ) [private]

Definition at line 1227 of file DenoisingRecurrentNet.cc.

References dynamic_connections, PLERROR, and PLearn::RBMMatrixConnection::weights.

Referenced by recurrentUpdate().

{
    RBMMatrixConnection* conn = dynamic_cast<RBMMatrixConnection*>((RBMConnection*)dynamic_connections);
    if(conn==0)
        PLERROR("Expecting dynamic connection to be a RBMMatrixConnection. I know it's dirty, but given where we are...");
    return conn->weights;
}

Mat PLearn::DenoisingRecurrentNet::getDynamicReconstructionConnectionsWeightMatrix ( ) [private]

Definition at line 1235 of file DenoisingRecurrentNet.cc.

References dynamic_reconstruction_connections, PLERROR, and PLearn::RBMMatrixConnection::weights.

Referenced by recurrentUpdate().

{
    RBMMatrixConnection* conn = dynamic_cast<RBMMatrixConnection*>((RBMConnection*)dynamic_reconstruction_connections);
    if(conn==0)
        PLERROR("Expecting dynamic reconstruction connection to be a RBMMatrixConnection. I know it's dirty, but given where we are...");
    return conn->weights;
}

Mat PLearn::DenoisingRecurrentNet::getInputConnectionsWeightMatrix ( ) [private]

Definition at line 1219 of file DenoisingRecurrentNet.cc.

References input_connections, PLERROR, and PLearn::RBMMatrixConnection::weights.

Referenced by recurrentUpdate().

{
    RBMMatrixConnection* conn = dynamic_cast<RBMMatrixConnection*>((RBMConnection*)input_connections);
    if(conn==0)
        PLERROR("Expecting input connection to be a RBMMatrixConnection. I know it's dirty, but given where we are...");
    return conn->weights;
}

static Vec PLearn::DenoisingRecurrentNet::getInputWindow ( Mat  sequence,
int  startpos,
int  winsize 
) [inline, static]

Definition at line 209 of file DenoisingRecurrentNet.h.

References PLearn::TMat< T >::subMatRows(), and PLearn::TMat< T >::toVec().

    { return sequence.subMatRows(startpos, winsize).toVec(); }

static void PLearn::DenoisingRecurrentNet::getNoteAndOctave ( int  midi_number,
int &  note,
int &  octave 
) [inline, static]

Definition at line 213 of file DenoisingRecurrentNet.h.

    {
        note = midi_number%12;
        octave = midi_number/12;
    }
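
A quick worked example of the decomposition (assuming using namespace PLearn):

    int note, octave;
    DenoisingRecurrentNet::getNoteAndOctave(60, note, octave);
    // middle C: note == 60 % 12 == 0, octave == 60 / 12 == 5
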
OptionList & PLearn::DenoisingRecurrentNet::getOptionList ( ) const [virtual]

Reimplemented from PLearn::Object.

Definition at line 71 of file DenoisingRecurrentNet.cc.

OptionMap & PLearn::DenoisingRecurrentNet::getOptionMap ( ) const [virtual]

Reimplemented from PLearn::Object.

Definition at line 71 of file DenoisingRecurrentNet.cc.

RemoteMethodMap & PLearn::DenoisingRecurrentNet::getRemoteMethodMap ( ) const [virtual]

Reimplemented from PLearn::Object.

Definition at line 71 of file DenoisingRecurrentNet.cc.

void PLearn::DenoisingRecurrentNet::getSequence ( int  i,
Mat &  seq 
) const

Returns the ith sequence.

Definition at line 2273 of file DenoisingRecurrentNet.cc.

References i, PLearn::TMat< T >::resize(), PLearn::PLearner::train_set, trainset_boundaries, w, and PLearn::VMat::width().

Referenced by train(), and trainUnconditionalPredictor().

{ 
    int start = 0;
    if(i>0)
        start = trainset_boundaries[i-1]+1;
    int end = trainset_boundaries[i]; // row of the end-of-sequence marker (excluded from seq)
    int w = train_set->width();
    seq.resize(end-start, w);
    train_set->getMat(start,0,seq);
}
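
A hypothetical loop over all training sequences (net is a PP<DenoisingRecurrentNet>); note that the end-of-sequence marker row itself is excluded from seq:

    Mat seq;
    for(int s = 0; s < net->nSequences(); s++)
    {
        net->getSequence(s, seq); // (sequence length) x (train_set width) matrix
        // ... process seq
    }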

Mat PLearn::DenoisingRecurrentNet::getTargetConnectionsWeightMatrix ( int  tar) [private]

Definition at line 1211 of file DenoisingRecurrentNet.cc.

References PLERROR, target_connections, and PLearn::RBMMatrixConnection::weights.

Referenced by recurrentUpdate().

{
    RBMMatrixConnection* conn = dynamic_cast<RBMMatrixConnection*>((RBMConnection*)target_connections[tar]);
    if(conn==0)
        PLERROR("Expecting target connection to be a RBMMatrixConnection. I know it's dirty, but given where we are...");
    return conn->weights;
}

TVec< string > PLearn::DenoisingRecurrentNet::getTestCostNames ( ) const [virtual]

Returns the names of the costs computed by computeCostsFromOutputs (and thus the test method).

Implements PLearn::PLearner.

Definition at line 2684 of file DenoisingRecurrentNet.cc.

References PLearn::TVec< T >::append(), i, PLearn::TVec< T >::length(), target_layers, and PLearn::tostring().

Referenced by getTrainCostNames().

{
    TVec<string> cost_names(0);
    for( int i=0; i<target_layers.length(); i++ )
        cost_names.append("target" + tostring(i) + ".NLL");
    return cost_names;
}

TVec< string > PLearn::DenoisingRecurrentNet::getTrainCostNames ( ) const [virtual]

Returns the names of the objective costs that the train method computes and for which it updates the VecStatsCollector train_stats.

Implements PLearn::PLearner.

Definition at line 2692 of file DenoisingRecurrentNet.cc.

References getTestCostNames().

Referenced by train(), and trainUnconditionalPredictor().

{
    return getTestCostNames();
}

void PLearn::DenoisingRecurrentNet::inject_zero_forcing_noise ( Mat  sequence,
double  noise_prob 
) const

Definition at line 2466 of file DenoisingRecurrentNet.cc.

References PLearn::TMat< T >::data(), PLearn::TMat< T >::isCompact(), n, PLERROR, PLearn::PLearner::random_gen, and PLearn::TMat< T >::size().

Referenced by fpropHiddenReconstructionFromLastHidden(), and splitRawMaskedSupervisedSequence().

{
    if(!sequence.isCompact())
        PLERROR("Expected a compact sequence");
    real* p = sequence.data();
    int n = sequence.size();
    while(n--)
    {
        if(*p!=real(0.) && random_gen->uniform_sample()<noise_prob)
            *p = real(0.);
        ++p;
    }
}
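
A hypothetical call, zeroing each nonzero entry of an encoded sequence in place with probability 0.3 (the learner's random_gen must be initialized):

    net->inject_zero_forcing_noise(encoded, 0.30);

Entries that are already zero are left untouched, so the noise only removes information, never adds any.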

void PLearn::DenoisingRecurrentNet::inject_zero_forcing_noise ( Vec  sequence,
double  noise_prob 
) const

Definition at line 2481 of file DenoisingRecurrentNet.cc.

References PLearn::TVec< T >::data(), n, PLearn::PLearner::random_gen, and PLearn::TVec< T >::size().

{
    
    real* p = sequence.data();
    int n = sequence.size();
    while(n--)
    {
        if(*p!=real(0.) && random_gen->uniform_sample()<noise_prob)
            *p = real(0.);
        ++p;
    }
}

void PLearn::DenoisingRecurrentNet::locateSequenceBoundaries ( VMat  dataset,
TVec< int > &  boundaries,
real  end_of_sequence_symbol 
) [static]

Definition at line 2292 of file DenoisingRecurrentNet.cc.

References PLearn::TVec< T >::append(), i, PLearn::VMat::length(), and PLearn::TVec< T >::resize().

Referenced by setTrainingSet(), and test().

{
    boundaries.resize(10000); // pre-allocate capacity...
    boundaries.resize(0);     // ...then reset the length, keeping the storage
    int l = dataset->length();
    for(int i=0; i<l; i++)
    {
        if(dataset(i,0)==end_of_sequence_symbol)
            boundaries.append(i);
    }
}
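
A hypothetical call (dataset and end_symbol are illustrative names; end_symbol must match the value stored in column 0 of boundary rows):

    TVec<int> boundaries;
    DenoisingRecurrentNet::locateSequenceBoundaries(dataset, boundaries, end_symbol);
    // boundaries[k] is the row index of the k-th end-of-sequence marker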

void PLearn::DenoisingRecurrentNet::makeDeepCopyFromShallowCopy ( CopiesMap copies) [virtual]

Transforms a shallow copy into a deep copy.

Reimplemented from PLearn::PLearner.

Definition at line 511 of file DenoisingRecurrentNet.cc.

References acc_dynamic_connections_gr, acc_hidden_bias_gr, acc_input_connections_gr, acc_recons_bias_gr, acc_reconstruction_dynamic_connections_gr, acc_target_bias_gr, acc_target_connections_gr, bias_gradient, clean_encoded_seq, data, PLearn::deepCopyField(), dynamic_act_no_bias_contribution, dynamic_connections, dynamic_reconstruction_connections, encoded_seq, hidden2_act_no_bias_list, hidden2_list, hidden_act_no_bias_list, hidden_connections, hidden_gradient, hidden_layer, hidden_layer2, hidden_list, hidden_reconstruction_bias, hidden_reconstruction_bias2, hidden_reconstruction_prob, hidden_temporal_gradient, input_connections, input_layer, input_list, input_reconstruction_bias, input_reconstruction_prob, input_symbol_sizes, PLearn::PLearner::makeDeepCopyFromShallowCopy(), masks_list, mean_encoded_vec, nll_list, seq, target_connections, target_layers, target_layers_n_of_target_elements, target_layers_weights, target_prediction_act_no_bias_list, target_prediction_list, target_symbol_sizes, targets_list, testset_boundaries, trainset_boundaries, and visi_bias_gradient.

{
    inherited::makeDeepCopyFromShallowCopy(copies);

    // Public fields
    deepCopyField( target_layers_weights, copies );
    deepCopyField( input_layer, copies);
    deepCopyField( target_layers , copies);
    deepCopyField( hidden_layer, copies);
    deepCopyField( hidden_layer2 , copies);
    deepCopyField( dynamic_connections , copies);
    deepCopyField( dynamic_reconstruction_connections , copies);
    deepCopyField( hidden_connections , copies);
    deepCopyField( input_connections , copies);
    deepCopyField( target_connections , copies);
    deepCopyField( target_layers_n_of_target_elements, copies);
    deepCopyField( input_symbol_sizes, copies);
    deepCopyField( target_symbol_sizes, copies);
    deepCopyField( mean_encoded_vec, copies);
    deepCopyField( input_reconstruction_bias, copies);
    deepCopyField( hidden_reconstruction_bias, copies);
    deepCopyField( hidden_reconstruction_bias2, copies);

    // Protected fields
    deepCopyField( data, copies);
    deepCopyField( acc_target_connections_gr, copies);
    deepCopyField( acc_input_connections_gr, copies);
    deepCopyField( acc_dynamic_connections_gr, copies);
    deepCopyField( acc_reconstruction_dynamic_connections_gr, copies);
    deepCopyField( acc_target_bias_gr, copies);
    deepCopyField( acc_hidden_bias_gr, copies);
    deepCopyField( acc_recons_bias_gr, copies);
    deepCopyField( bias_gradient , copies);
    deepCopyField( visi_bias_gradient , copies);
    deepCopyField( hidden_gradient , copies);
    deepCopyField( hidden_temporal_gradient , copies);
    deepCopyField( hidden_list , copies);
    deepCopyField( hidden_act_no_bias_list , copies);
    deepCopyField( hidden2_list , copies);
    deepCopyField( hidden2_act_no_bias_list , copies);
    deepCopyField( target_prediction_list , copies);
    deepCopyField( target_prediction_act_no_bias_list , copies);
    deepCopyField( input_list , copies);
    deepCopyField( targets_list , copies);
    deepCopyField( nll_list , copies);
    deepCopyField( masks_list , copies);
    deepCopyField( dynamic_act_no_bias_contribution, copies);
    deepCopyField( trainset_boundaries, copies);
    deepCopyField( testset_boundaries, copies);
    deepCopyField( seq, copies);
    deepCopyField( encoded_seq, copies);
    deepCopyField( clean_encoded_seq, copies);
    deepCopyField( input_reconstruction_prob, copies);
    deepCopyField( hidden_reconstruction_prob, copies);
    

    // deepCopyField(, copies);

    //PLERROR("DenoisingRecurrentNet::makeDeepCopyFromShallowCopy(): "
    //"not implemented yet");
}

int PLearn::DenoisingRecurrentNet::nSequences ( ) const [inline]

Returns the number of sequences in the training_set.

Definition at line 262 of file DenoisingRecurrentNet.h.

References PLearn::TVec< T >::length(), and trainset_boundaries.

Referenced by train(), and trainUnconditionalPredictor().

    { return trainset_boundaries.length(); }

int PLearn::DenoisingRecurrentNet::outputsize ( ) const [virtual]

Returns the size of this learner's output, (which typically may depend on its inputsize(), targetsize() and set options).

Implements PLearn::PLearner.

Definition at line 574 of file DenoisingRecurrentNet.cc.

References i, PLearn::TVec< T >::length(), and target_layers.

Referenced by generate(), generateArtificial(), and test().

{
    int out_size = 0;
    for( int i=0; i<target_layers.length(); i++ )
        out_size += target_layers[i]->size;
    return out_size;
}

void PLearn::DenoisingRecurrentNet::partition ( TVec< double >  part,
TVec< double >  periode,
TVec< double >  vel 
) const

Use the partition.

void PLearn::DenoisingRecurrentNet::recurrentFprop ( Vec  train_costs,
Vec  train_n_items,
bool  useDynamicConnections = true 
) const [private]

Definition at line 1125 of file DenoisingRecurrentNet.cc.

References dynamic_act_no_bias_contribution, dynamic_connections, PLearn::fast_exact_is_equal(), hidden2_act_no_bias_list, hidden2_list, hidden_act_no_bias_list, hidden_connections, hidden_layer, hidden_layer2, hidden_list, i, input_connections, input_list, PLearn::TVec< T >::length(), masks_list, nll_list, PLearn::sum(), target_connections, target_layers, target_layers_weights, target_prediction_act_no_bias_list, target_prediction_list, targets_list, and use_target_layers_masks.

Referenced by test(), and train().

{
    int l = input_list.length();
    int ntargets = target_layers.length();

    for(int i=0; i<l; i++ )
    {
        Vec hidden_act_no_bias_i = hidden_act_no_bias_list(i);
        input_connections->fprop( input_list[i], hidden_act_no_bias_i);
        if(useDynamicConnections){
            if( i > 0 && dynamic_connections )
            {
                Vec hidden_i_prev = hidden_list(i-1);
                dynamic_connections->fprop(hidden_i_prev,dynamic_act_no_bias_contribution );
                hidden_act_no_bias_i += dynamic_act_no_bias_contribution;
            }
        }
        Vec hidden_i = hidden_list(i);
        hidden_layer->fprop( hidden_act_no_bias_i, 
                             hidden_i);
        
        Vec last_hidden = hidden_i;

        if( hidden_layer2 )
        {
            Vec hidden2_i = hidden2_list(i); 
            Vec hidden2_act_no_bias_i = hidden2_act_no_bias_list(i);

            hidden_connections->fprop(hidden_i, hidden2_act_no_bias_i);            
            hidden_layer2->fprop(hidden2_act_no_bias_i, hidden2_i);

            last_hidden = hidden2_i; // last hidden layer vec 
        }

        for( int tar=0; tar < ntargets; tar++ )
        {
            if( !fast_exact_is_equal(target_layers_weights[tar],0) )
            {
                Vec target_prediction_i = target_prediction_list[tar](i);
                Vec target_prediction_act_no_bias_i = target_prediction_act_no_bias_list[tar](i);
                target_connections[tar]->fprop(last_hidden, target_prediction_act_no_bias_i);
                target_layers[tar]->fprop(target_prediction_act_no_bias_i, target_prediction_i);
                if( use_target_layers_masks )
                    target_prediction_i *= masks_list[tar](i);

                target_layers[tar]->activation << target_prediction_act_no_bias_i;
                target_layers[tar]->activation += target_layers[tar]->bias;
                target_layers[tar]->setExpectation(target_prediction_i);

                Vec target_vec = targets_list[tar](i);
                nll_list(i,tar) = target_layers[tar]->fpropNLL(target_vec); 
                train_costs[tar] += nll_list(i,tar);

                // Normalize by the number of things to predict
                if( use_target_layers_masks )
                    train_n_items[tar] += sum(masks_list[tar](i));
                else
                    train_n_items[tar]++;
            }
        }
    }
}
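The loop above implements the following per-time-step recurrence. As a sketch, writing \(\sigma\) for the hidden layer's nonlinearity and assuming softmax target layers (the actual activation functions are whatever the configured RBMLayer objects implement):

\[ a_t = W_{\mathrm{in}}\, x_t + W_{\mathrm{dyn}}\, h_{t-1}, \qquad h_t = \sigma(a_t + b_h), \qquad \hat y_t^{(k)} = \mathrm{softmax}(W_k\, \bar h_t + b_k), \]

where \(\bar h_t\) is \(h_t\), or the second hidden layer computed from it when hidden_layer2 is present, and the per-step cost is \(\mathrm{nll\_list}(t,k) = \mathrm{fpropNLL}(y_t^{(k)})\).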

void PLearn::DenoisingRecurrentNet::recurrentUpdate ( real  input_reconstruction_weight,
real  hidden_reconstruction_cost_weight,
real  temporal_gradient_contribution,
real  prediction_cost_weight,
real  inputAndDynamicPart,
Vec  train_costs,
Vec  train_n_items 
)

Updates both the RBM parameters and the dynamic connections in the recurrent tuning phase, after the visible units have been clamped.

Definition at line 1862 of file DenoisingRecurrentNet.cc.

References acc_dynamic_connections_gr, acc_input_connections_gr, acc_reconstruction_dynamic_connections_gr, acc_target_connections_gr, bias_gradient, bpropUpdateConnection(), bpropUpdateHiddenLayer(), PLearn::TMat< T >::clear(), PLearn::TVec< T >::clear(), current_learning_rate, dynamic_connections, PLearn::fast_exact_is_equal(), fpropHiddenReconstructionFromLastHidden(), fpropUpdateInputReconstructionFromHidden(), getDynamicConnectionsWeightMatrix(), getDynamicReconstructionConnectionsWeightMatrix(), getInputConnectionsWeightMatrix(), getTargetConnectionsWeightMatrix(), hidden2_act_no_bias_list, hidden2_list, hidden_act_no_bias_list, hidden_connections, hidden_gradient, hidden_layer, hidden_layer2, hidden_list, hidden_reconstruction_bias, hidden_reconstruction_bias2, hidden_reconstruction_prob, hidden_temporal_gradient, i, input_connections, input_list, input_reconstruction_bias, input_reconstruction_prob, PLearn::TMat< T >::length(), PLearn::TVec< T >::length(), masks_list, PLearn::multiplyAcc(), nll_list, PLearn::TMat< T >::resize(), PLearn::TVec< T >::resize(), target_connections, target_layers, target_layers_weights, target_prediction_act_no_bias_list, target_prediction_list, targets_list, updateTargetLayer(), use_target_layers_masks, and visi_bias_gradient.

Referenced by train().

{
    TVec<Mat> targetWeights;
    Mat inputWeights;
    Mat dynamicWeights;
    Mat reconsWeights;
    targetWeights.resize(target_connections.length());
    for( int tar=0; tar<target_layers.length(); tar++)
    {
       targetWeights[tar] = getTargetConnectionsWeightMatrix(tar);
    }
    inputWeights = getInputConnectionsWeightMatrix();
    if(dynamic_connections )
    { 
        dynamicWeights = getDynamicConnectionsWeightMatrix();
        reconsWeights = getDynamicReconstructionConnectionsWeightMatrix();
    }
    acc_target_connections_gr.resize(target_connections.length());
    for( int tar=0; tar<target_layers.length(); tar++)
    {
        acc_target_connections_gr[tar].resize(target_connections[tar]->up_size, target_connections[tar]->down_size);
        acc_target_connections_gr[tar].clear();
    }
    acc_input_connections_gr.resize(input_connections->up_size, input_connections->down_size);
    acc_input_connections_gr.clear();
    if(dynamic_connections )
    { 
        acc_dynamic_connections_gr.resize(dynamic_connections->up_size, dynamic_connections->down_size);
        acc_dynamic_connections_gr.clear();
        acc_reconstruction_dynamic_connections_gr.resize(dynamic_connections->down_size, dynamic_connections->up_size);
        acc_reconstruction_dynamic_connections_gr.clear();
    }


    hidden_temporal_gradient.resize(hidden_layer->size);
    hidden_temporal_gradient.clear();
    for(int i=hidden_list.length()-1; i>=0; i--){   

        if( hidden_layer2 )
            hidden_gradient.resize(hidden_layer2->size);
        else
            hidden_gradient.resize(hidden_layer->size);
        hidden_gradient.clear();
        if( prediction_cost_weight!=0 )
        {
            for( int tar=0; tar<target_layers.length(); tar++)
            {
                if( !fast_exact_is_equal(target_layers_weights[tar],0) )
                {
                    target_layers[tar]->activation << target_prediction_act_no_bias_list[tar](i);
                    target_layers[tar]->activation += target_layers[tar]->bias;
                    target_layers[tar]->setExpectation(target_prediction_list[tar](i));
                    target_layers[tar]->bpropNLL(targets_list[tar](i),nll_list(i,tar),bias_gradient);
                    bias_gradient *= prediction_cost_weight;
                    if(use_target_layers_masks)
                        bias_gradient *= masks_list[tar](i);
                    updateTargetLayer( bias_gradient, 
                                       target_layers[tar]->bias, 
                                       target_layers[tar]->learning_rate );
                    if( hidden_layer2 ){
                        bpropUpdateConnection(hidden2_list(i),
                                              target_prediction_act_no_bias_list[tar](i),
                                              hidden_gradient, 
                                              bias_gradient,
                                              targetWeights[tar],
                                              acc_target_connections_gr[tar],
                                              target_connections[tar]->down_size,
                                              target_connections[tar]->up_size,
                                              target_connections[tar]->learning_rate,
                                              true,
                                              false);
                    }
                    else{
                        bpropUpdateConnection(hidden_list(i),
                                              target_prediction_act_no_bias_list[tar](i),
                                              hidden_gradient, 
                                              bias_gradient,
                                              targetWeights[tar],
                                              acc_target_connections_gr[tar],
                                              target_connections[tar]->down_size,
                                              target_connections[tar]->up_size,
                                              target_connections[tar]->learning_rate,
                                              true,
                                              false);
                    }
                }
            }

            if (hidden_layer2)
            {
                hidden_layer2->bpropUpdate(
                    hidden2_act_no_bias_list(i), hidden2_list(i),
                    bias_gradient, hidden_gradient);
                
                hidden_connections->bpropUpdate(
                    hidden_list(i),
                    hidden2_act_no_bias_list(i), 
                    hidden_gradient, bias_gradient);
            }
        }

        if(inputAndDynamicPart){   
            // Add contribution of input reconstruction cost in hidden_gradient
            if(input_reconstruction_weight!=0)
            {
                
                train_costs[train_costs.length()-2] += fpropUpdateInputReconstructionFromHidden(hidden_list(i), inputWeights, acc_input_connections_gr, input_reconstruction_bias, input_reconstruction_prob, 
                                                                           input_list[i], hidden_gradient, input_reconstruction_weight, current_learning_rate);
                train_n_items[train_costs.length()-2]++;
            }
            
            if(i>1 && dynamic_connections )
            {   
                
                // Add contribution of hidden reconstruction cost in hidden_gradient
                Vec hidden_reconstruction_activation_grad;
                hidden_reconstruction_activation_grad.resize(hidden_layer->size);
                if(hidden_reconstruction_weight!=0)
                {
                    train_costs[train_costs.length()-1] += fpropHiddenReconstructionFromLastHidden(input_list[i], 
                                                                                                   hidden_list(i), 
                                                                                                   dynamicWeights,
                                                                                                   acc_dynamic_connections_gr,
                                                                                                   hidden_reconstruction_bias, 
                                                                                                   hidden_reconstruction_bias2, 
                                                                                                   hidden_reconstruction_activation_grad, 
                                                                                                   hidden_reconstruction_prob, 
                                                                                                   hidden_list(i-1), 
                                                                                                   hidden_gradient, 
                                                                                                   hidden_reconstruction_weight, 
                                                                                                   current_learning_rate);
                    
                    
                    train_n_items[train_costs.length()-1]++;
                }
                
                
                // add contribution to gradient of next time step hidden layer
                if(temporal_gradient_contribution>0)
                { // add weighted contribution of hidden_temporal gradient to hidden_gradient
                    // It does this: hidden_gradient += temporal_gradient_contribution*hidden_temporal_gradient;
                    multiplyAcc(hidden_gradient, hidden_temporal_gradient, temporal_gradient_contribution);
                    
                }

                bpropUpdateHiddenLayer(hidden_act_no_bias_list(i), 
                                       hidden_list(i),
                                       hidden_temporal_gradient, 
                                       hidden_gradient,
                                       hidden_layer->bias, 
                                       hidden_layer->learning_rate );
                
                
                //input
                bpropUpdateConnection(input_list[i],
                                      hidden_act_no_bias_list(i), 
                                      visi_bias_gradient, 
                                      hidden_temporal_gradient,// Here, it should be activations - cond_bias, but doesn't matter
                                      inputWeights,
                                      acc_input_connections_gr,
                                      input_connections->down_size,
                                      input_connections->up_size,
                                      input_connections->learning_rate,
                                      false,
                                      true);

                //Dynamic

                bpropUpdateConnection(hidden_list(i-1),
                                      hidden_act_no_bias_list(i), // Here, it should be dynamic_act_no_bias_contribution, but doesn't matter because a RBMMatrixConnection::bpropUpdate doesn't use its second argument
                                      hidden_gradient, 
                                      hidden_temporal_gradient, 
                                      dynamicWeights,
                                      acc_dynamic_connections_gr,
                                      dynamic_connections->down_size,
                                      dynamic_connections->up_size,
                                      dynamic_connections->learning_rate,
                                      false,
                                      false);
                
                hidden_temporal_gradient << hidden_gradient; 
            }
            else
            {
                if(input_reconstruction_weight==0)
                {
                    bpropUpdateHiddenLayer(hidden_act_no_bias_list(i), 
                                           hidden_list(i),
                                           hidden_temporal_gradient, // Not really temporal gradient, but this is the final iteration...
                                           hidden_gradient,
                                           hidden_layer->bias, 
                                           hidden_layer->learning_rate );
                    
                    //input
                    bpropUpdateConnection(input_list[i],
                                          hidden_act_no_bias_list(i), 
                                          visi_bias_gradient, 
                                          hidden_temporal_gradient,// Here, it should be activations - cond_bias, but doesn't matter
                                          inputWeights,
                                          acc_input_connections_gr,
                                          input_connections->down_size,
                                          input_connections->up_size,
                                          input_connections->learning_rate,
                                          false,
                                          true);
                }
            }
        }
    }
    
    
    // apply the accumulated gradients to the connection weight matrices
    for( int tar=0; tar<target_layers.length(); tar++)
    {
        multiplyAcc(targetWeights[tar], acc_target_connections_gr[tar], 1);
    }
    multiplyAcc(inputWeights, acc_input_connections_gr, 1);
    
    if(dynamic_connections )
    {
        multiplyAcc(dynamicWeights, acc_dynamic_connections_gr, 1);
    }
}
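The structure worth noting in recurrentUpdate() is the deferred weight update: each bpropUpdateConnection() call only accumulates its gradient into an acc_*_gr matrix, and the weight matrices themselves are modified once per sequence, by the multiplyAcc() calls at the end. A minimal standalone sketch of that idiom (hypothetical names; a plain std::vector stands in for PLearn's Mat):

#include <cstddef>
#include <vector>

int main()
{
    const double lr = 0.01;
    std::vector<double> weights(4, 0.1); // stands in for a connection's weight matrix
    std::vector<double> acc(4, 0.0);     // stands in for an acc_*_connections_gr buffer

    for (int t = 9; t >= 0; --t)         // backward pass over the time steps
    {
        const double per_step_gradient = 0.5; // placeholder for the real bprop result
        for (std::size_t j = 0; j < acc.size(); ++j)
            acc[j] += -lr * per_step_gradient; // accumulate only, no weight update yet
    }

    for (std::size_t j = 0; j < weights.size(); ++j)
        weights[j] += acc[j];            // like multiplyAcc(weights, acc, 1): applied once
    return 0;
}

Updating once per sequence keeps the backward passes consistent: every time step sees the same weights that were used in the forward pass.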

void PLearn::DenoisingRecurrentNet::resize_lists ( int  l) const [private]

Definition at line 1054 of file DenoisingRecurrentNet.cc.

References hidden2_act_no_bias_list, hidden2_list, hidden_act_no_bias_list, hidden_layer, hidden_layer2, hidden_list, input_list, PLearn::TVec< T >::length(), nll_list, PLearn::TMat< T >::resize(), PLearn::TVec< T >::resize(), PLearn::TVec< T >::size(), target_layers, target_prediction_act_no_bias_list, and target_prediction_list.

Referenced by encode_artificialData(), encodeAndCreateSupervisedSequence(), encodeAndCreateSupervisedSequence2(), and splitRawMaskedSupervisedSequence().

{
    input_list.resize(l);
    hidden_list.resize(l, hidden_layer->size);
    hidden_act_no_bias_list.resize(l, hidden_layer->size);

    if( hidden_layer2 )
    {
        hidden2_list.resize(l, hidden_layer2->size);
        hidden2_act_no_bias_list.resize(l, hidden_layer2->size);
    }

    int ntargets = target_layers.length();
    target_prediction_list.resize( ntargets );
    target_prediction_act_no_bias_list.resize( ntargets );

    for( int tar=0; tar < ntargets; tar++ )
    {
        int targsize = target_layers[tar]->size;
        target_prediction_list[tar].resize(l, targsize);
        target_prediction_act_no_bias_list[tar].resize(l, targsize);
    }

    nll_list.resize(l,ntargets);
}

void PLearn::DenoisingRecurrentNet::setLearningRate ( real  the_learning_rate)

Sets the learning rate of all layers and connections. Remembers it by copying the value to current_learning_rate.

Definition at line 2550 of file DenoisingRecurrentNet.cc.

References current_learning_rate, dynamic_connections, dynamic_reconstruction_connections, hidden_connections, hidden_layer, hidden_layer2, i, input_connections, input_layer, PLearn::TVec< T >::length(), target_connections, and target_layers.

Referenced by train().

{
    current_learning_rate = the_learning_rate;
    input_layer->setLearningRate( the_learning_rate );
    hidden_layer->setLearningRate( the_learning_rate );
    input_connections->setLearningRate( the_learning_rate );
    if( dynamic_connections ){
        dynamic_connections->setLearningRate( the_learning_rate ); 
    }
    if( dynamic_reconstruction_connections ){
        dynamic_reconstruction_connections->setLearningRate( the_learning_rate ); 
    }
    if( hidden_layer2 )
    {
        hidden_layer2->setLearningRate( the_learning_rate );
        hidden_connections->setLearningRate( the_learning_rate );
    }

    for( int i=0; i<target_layers.length(); i++ )
    {
        target_layers[i]->setLearningRate( the_learning_rate );
        target_connections[i]->setLearningRate( the_learning_rate );
    }
}

void PLearn::DenoisingRecurrentNet::setTrainingSet ( VMat  training_set,
bool  call_forget = true 
) [virtual]

Declares the training set.

Then calls build() and forget() if necessary. Also sets this learner's inputsize_, targetsize_ and weightsize_ from those of the training_set. Note: you shouldn't have to override this in subclasses, except maybe to forward the call to an underlying learner.

Reimplemented from PLearn::PLearner.

Definition at line 2285 of file DenoisingRecurrentNet.cc.

References end_of_sequence_symbol, locateSequenceBoundaries(), PLearn::PLearner::setTrainingSet(), and trainset_boundaries.

void PLearn::DenoisingRecurrentNet::splitRawMaskedSupervisedSequence ( Mat  seq,
bool  doNoise 
) const [private]

For the raw_masked_supervised case (kept for backward testing). Populates input_list, targets_list, and masks_list.

Definition at line 973 of file DenoisingRecurrentNet.cc.

References encoded_seq, i, inject_zero_forcing_noise(), input_list, input_noise_prob, PLearn::PLearner::inputsize(), PLearn::TMat< T >::length(), PLearn::TVec< T >::length(), masks_list, PLearn::TMat< T >::resize(), PLearn::TVec< T >::resize(), resize_lists(), PLearn::TVec< T >::size(), PLearn::TMat< T >::subMatColumns(), target_layers, targets_list, PLearn::PLearner::targetsize(), and PLearn::TMat< T >::width().

Referenced by encodeSequenceAndPopulateLists().

{
    int l = seq.length();
    resize_lists(l);
    int inputsize_without_masks = inputsize()-targetsize();
    Mat input_part;
    input_part.resize(seq.length(),inputsize_without_masks);
    input_part << seq.subMatColumns(0,inputsize_without_masks);
    Mat mask_part = seq.subMatColumns(inputsize_without_masks, targetsize());
    Mat target_part = seq.subMatColumns(inputsize_without_masks+targetsize(), targetsize());

    if(doNoise)
        inject_zero_forcing_noise(input_part, input_noise_prob);

    for(int i=0; i<l; i++)
        input_list[i] = input_part(i);

    int ntargets = target_layers.length();
    targets_list.resize(ntargets);
    masks_list.resize(ntargets);
    int startcol = 0; // starting column of next target in target_part and mask_part
    for(int k=0; k<ntargets; k++)
    {
        int targsize = target_layers[k]->size;
        targets_list[k] = target_part.subMatColumns(startcol, targsize);
        masks_list[k] = mask_part.subMatColumns(startcol, targsize);
        startcol += targsize;
    }

    encoded_seq.resize(input_part.length(), input_part.width());
    encoded_seq << input_part;
}
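Restating the column layout the code above assumes for each row of seq, with I = inputsize() - targetsize() and T = targetsize():

\[ \underbrace{[\,0, \dots, I-1\,]}_{\text{input part}} \quad \underbrace{[\,I, \dots, I+T-1\,]}_{\text{mask part}} \quad \underbrace{[\,I+T, \dots, I+2T-1\,]}_{\text{target part}} \]

Within the mask and target parts, each target layer k occupies the next target_layers[k]->size consecutive columns.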

void PLearn::DenoisingRecurrentNet::test ( VMat  testset,
PP< VecStatsCollector test_stats,
VMat  testoutputs = 0,
VMat  testcosts = 0 
) const [virtual]

Performs test on testset, updating test cost statistics, and optionally filling testoutputs and testcosts.

The default version repeatedly calls computeOutputAndCosts or computeCostsOnly. Note that neither test_stats->forget() nor test_stats->finalize() is called, so you should call them yourself (respectively before and after calling this method) if you don't plan to accumulate statistics.

Reimplemented from PLearn::PLearner.

Definition at line 2595 of file DenoisingRecurrentNet.cc.

References PLearn::TVec< T >::clear(), encoded_seq, encodeSequenceAndPopulateLists(), end_of_sequence_symbol, PLearn::fast_exact_is_equal(), PLearn::TVec< T >::fill(), i, input_window_size, PLearn::TVec< T >::length(), PLearn::VMat::length(), locateSequenceBoundaries(), MISSING_VALUE, PLearn::PLearner::nTestCosts(), outputsize(), recurrentFprop(), PLearn::PLearner::report_progress, PLearn::TMat< T >::resize(), seq, PLearn::TVec< T >::size(), PLearn::TVec< T >::subVec(), target_layers, target_layers_weights, target_prediction_list, testset_boundaries, unconditionalFprop(), w, and PLearn::VMat::width().

{
    int len = testset.length();

    Vec output(outputsize());
    output.clear();

    Vec costs(nTestCosts());
    costs.clear();
    Vec n_items(nTestCosts());
    n_items.clear();

    PP<ProgressBar> pb;
    if (report_progress) 
        pb = new ProgressBar("Testing learner", len);

    if (len == 0) {
        // Empty test set: we give -1 cost arbitrarily.
        costs.fill(-1);
        test_stats->update(costs);
    }

    int w = testset->width();
    locateSequenceBoundaries(testset, testset_boundaries, end_of_sequence_symbol);
    int nseq = testset_boundaries.length();

    seq.resize(5000,2); // contains the current sequence
    encoded_seq.resize(5000, 4);


    int pos = 0; // position in testoutputs
    for(int i=0; i<nseq; i++)
    {
        int start = 0;
        if(i>0)
            start = testset_boundaries[i-1]+1;
        int end = testset_boundaries[i];
        int seqlen = end-start; // target_prediction_list[0].length();
        seq.resize(seqlen, w);
        testset->getMat(start,0,seq);
        encodeSequenceAndPopulateLists(seq, false);

        if(input_window_size==0)
            unconditionalFprop(costs, n_items);
        else
            recurrentFprop(costs, n_items);

        if (testoutputs)
        {
            for(int t=0; t<seqlen; t++)
            {
                int sum_target_layers_size = 0;
                for( int tar=0; tar < target_layers.length(); tar++ )
                {
                    if( !fast_exact_is_equal(target_layers_weights[tar],0) )
                    {
                        output.subVec(sum_target_layers_size,target_layers[tar]->size)
                            << target_prediction_list[tar](t);
                    }
                    sum_target_layers_size += target_layers[tar]->size;
                }
                testoutputs->putOrAppendRow(pos++, output);
            }
            output.fill(end_of_sequence_symbol);
            testoutputs->putOrAppendRow(pos++, output);
        }
        else
            pos += seqlen;

        if (report_progress)
            pb->update(pos);
    }

    for(int i=0; i<costs.length(); i++)
    {
        if( !fast_exact_is_equal(target_layers_weights[i],0) )
            costs[i] /= n_items[i];
        else
            costs[i] = MISSING_VALUE;
    }
    if (testcosts)
        testcosts->putOrAppendRow(0, costs);
    
    if (test_stats)
        test_stats->update(costs, 1.);
}
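The final loop turns the accumulated totals into means; for each cost k tied to a target layer with nonzero weight,

\[ \mathrm{costs}[k] = \frac{\sum_{\text{sequences}} \sum_{t} \mathrm{nll\_list}(t,k)}{\mathrm{n\_items}[k]} , \]

while costs of zero-weight target layers are reported as MISSING_VALUE.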

void PLearn::DenoisingRecurrentNet::train ( ) [virtual]

The role of the train method is to bring the learner up to stage==nstages, updating the train_stats collector with training costs measured on-line in the process.

Implements PLearn::PLearner.

Definition at line 669 of file DenoisingRecurrentNet.cc.

References classname(), PLearn::TVec< T >::clear(), dynamic_gradient_scale_factor, encodeSequenceAndPopulateLists(), PLearn::endl(), PLearn::fast_exact_is_equal(), getSequence(), getTrainCostNames(), hidden_noise_prob, hidden_reconstruction_cost_weight, hidden_reconstruction_lr, i, PLearn::PLearner::initTrain(), input_noise_prob, input_reconstruction_cost_weight, input_reconstruction_lr, input_window_size, PLearn::TVec< T >::length(), MISSING_VALUE, nb_stage_reconstruction, noise, noisy_recurrent_lr, nSequences(), PLearn::PLearner::nstages, prediction_cost_weight, recurrent_lr, recurrentFprop(), recurrentUpdate(), PLearn::PLearner::report_progress, PLearn::TVec< T >::resize(), PLearn::TMat< T >::resize(), seq, setLearningRate(), PLearn::PLearner::stage, target_layers_weights, PLearn::PLearner::train_stats, trainUnconditionalPredictor(), PLearn::ProgressBar::update(), and PLearn::PLearner::verbosity.

{
    if(input_window_size==0)
    {
        trainUnconditionalPredictor();
        return;
    }

    MODULE_LOG << "train() called " << endl;

    // reserve memory for sequences
    seq.resize(5000,2); // contains the current sequence

    Vec train_costs( getTrainCostNames().length() );
    train_costs.clear();
    Vec train_n_items( getTrainCostNames().length() );

    if( !initTrain() )
    {
        MODULE_LOG << "train() aborted" << endl;
        return;
    }

    ProgressBar* pb = 0;

    // clear stats of previous epoch
    train_stats->forget();


    /***** Recurrent phase *****/
    if( stage >= nstages )
        return;

    if( stage < nstages )
    {        

        MODULE_LOG << "Training the whole model" << endl;

        int init_stage = stage;
        int end_stage = nstages;

        MODULE_LOG << "  stage = " << stage << endl;
        MODULE_LOG << "  end_stage = " << end_stage << endl;
        MODULE_LOG << "  input_noise_prob = " <<                input_noise_prob  << endl;              
        MODULE_LOG << "  input_reconstruction_lr = " <<         input_reconstruction_lr  << endl;       
        MODULE_LOG << "  hidden_noise_prob = " <<               hidden_noise_prob  << endl;             
        MODULE_LOG << "  hidden_reconstruction_lr = " <<        hidden_reconstruction_lr  << endl;      
        MODULE_LOG << "  noisy_recurrent_lr = " <<              noisy_recurrent_lr  << endl;            
        MODULE_LOG << "  dynamic_gradient_scale_factor = " <<   dynamic_gradient_scale_factor  << endl; 
        MODULE_LOG << "  recurrent_lr = " <<                    recurrent_lr  << endl;                  


        if( report_progress && stage < end_stage )
            pb = new ProgressBar( "Recurrent training phase of "+classname(),
                                  end_stage - init_stage );

        int nCost = 2;
        train_costs.resize(train_costs.length() + nCost);
        train_n_items.resize(train_n_items.length() + nCost);
        while(stage < end_stage)
        {
            train_costs.clear();
            train_n_items.clear();

            int nseq = nSequences();
            for(int i=0; i<nseq; i++)
            {

                noise = (input_noise_prob != 0);

                getSequence(i, seq);
                encodeSequenceAndPopulateLists(seq, false);

                
              

                // recurrent no noise phase
                if(stage >= nb_stage_reconstruction){
                    if(recurrent_lr!=0)
                    {
                        
                        setLearningRate( recurrent_lr );                    
                        recurrentFprop(train_costs, train_n_items);
                        recurrentUpdate(0,0,1, prediction_cost_weight,1, train_costs, train_n_items );
                        
                    }
                }

                if(stage < nb_stage_reconstruction || nb_stage_reconstruction == 0 ){
                

                    // greedy phase hidden
                    if(hidden_reconstruction_lr!=0){
                        
                        setLearningRate( hidden_reconstruction_lr);
                        
                        recurrentFprop(train_costs, train_n_items, true);
                        recurrentUpdate(0, hidden_reconstruction_cost_weight, 1, 0,1, train_costs, train_n_items );
                    }
                
                    
                    // greedy phase input
                    if(input_reconstruction_lr!=0){
                        if (noise)
                            encodeSequenceAndPopulateLists(seq, true);
                        setLearningRate( input_reconstruction_lr );
                        recurrentFprop(train_costs, train_n_items, false);
                        if (noise)
                            encodeSequenceAndPopulateLists(seq, false);
                        recurrentUpdate(input_reconstruction_cost_weight, 0, 1, 0,1, train_costs, train_n_items );
                    }
                }

                // recurrent noisy phase
                if(noisy_recurrent_lr!=0)
                {
                    setLearningRate( noisy_recurrent_lr );
                    recurrentFprop(train_costs, train_n_items);
                    recurrentUpdate(input_reconstruction_cost_weight, hidden_reconstruction_cost_weight, 1,1, prediction_cost_weight, train_costs, train_n_items );
                }

                
            }
            noise = false;
            if( pb )
                pb->update( stage + 1 - init_stage);
            
            for(int i=0; i<train_costs.length(); i++)
            {
                
                if (train_costs[i] <= 0 || train_n_items[i] <= 0 ){
                    train_costs[i] = 1;
                    train_n_items[i] = 1; 
                }
                
                if (i < target_layers_weights.length()){
                    if( !fast_exact_is_equal(target_layers_weights[i],0) ){
                        train_costs[i] /= train_n_items[i];
                    }
                    else
                        train_costs[i] = MISSING_VALUE;
                }
                
                if (i == train_costs.length()-nCost ){
                    train_costs[i] /= train_n_items[i];
                }
                else if (i == train_costs.length()-1)
                    train_costs[i] /= train_n_items[i];
                
            }

            if(verbosity>0)
                cout << "mean costs at stage " << stage << 
                    " = " << train_costs << endl;
            stage++;
            train_stats->update(train_costs);
        }

        if( pb )
        {
            delete pb;
            pb = 0;
        }
    }

    train_stats->finalize();        
}
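To summarize the stage schedule implemented above as a sketch (hypothetical standalone code using this class's option names; in the real method, recurrentFprop()/recurrentUpdate() passes stand where the printf calls are):

#include <cstdio>

int main()
{
    int stage = 0, nstages = 4;
    int nb_stage_reconstruction = 2; // reconstruction-only stages come first
    const int nseq = 3;

    for (; stage < nstages; ++stage)
        for (int i = 0; i < nseq; ++i)
        {
            if (stage >= nb_stage_reconstruction)
                std::printf("stage %d, seq %d: recurrent pass (recurrent_lr)\n", stage, i);
            if (stage < nb_stage_reconstruction || nb_stage_reconstruction == 0)
                std::printf("stage %d, seq %d: greedy hidden and input reconstruction passes\n", stage, i);
            // plus an optional noisy recurrent pass when noisy_recurrent_lr != 0
        }
    return 0;
}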

void PLearn::DenoisingRecurrentNet::trainUnconditionalPredictor ( ) [private]

Definition at line 610 of file DenoisingRecurrentNet.cc.

References PLearn::TVec< T >::clear(), encoded_seq, encodeSequenceAndPopulateLists(), PLearn::endl(), PLearn::TVec< T >::fill(), getSequence(), getTrainCostNames(), i, PLearn::PLearner::initTrain(), PLearn::TMat< T >::length(), mean_encoded_vec, nSequences(), PLearn::PLearner::nstages, PLearn::PLearner::report_progress, PLearn::TMat< T >::resize(), PLearn::TVec< T >::resize(), seq, PLearn::PLearner::stage, PLearn::PLearner::train_stats, and PLearn::TMat< T >::width().

Referenced by train().

{
    MODULE_LOG << "trainUnconditionalPredictor() called " << endl;

    // reserve memory for sequences
    seq.resize(5000,2); // contains the current sequence

    Vec train_costs( getTrainCostNames().length() );
    train_costs.fill(-1);

    if( !initTrain() )
    {
        MODULE_LOG << "train() aborted" << endl;
        return;
    }


    if( stage==0 && nstages==1 )
    {        
        // clear stats of previous epoch
        train_stats->forget();


        int nvecs = 0;
        int nseq = nSequences();        

        ProgressBar* pb = 0;
        if( report_progress)
            pb = new ProgressBar( "Sequences ",nseq);
        for(int i=0; i<nseq; i++)
        {
            getSequence(i, seq);
            encodeSequenceAndPopulateLists(seq, false);
            if(i==0)
            {
                mean_encoded_vec.resize(encoded_seq.width());
                mean_encoded_vec.clear();
            }
            for(int t=0; t<encoded_seq.length(); t++)
            {
                mean_encoded_vec += encoded_seq(t);                
                nvecs++;
            }
        }
        mean_encoded_vec *= 1./nvecs;            
        train_stats->update(train_costs);
        train_stats->finalize();            

        if( pb )
        {
            delete pb;
            pb = 0;
        }
        ++stage;
    }
}
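The learned quantity is simply the mean of the encoded input vectors over every time step of every training sequence:

\[ \mathrm{mean\_encoded\_vec} = \frac{1}{N} \sum_{\text{sequences}} \sum_{t} e_t , \]

where \(e_t\) is row t of encoded_seq and N is the total row count (nvecs above).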

void PLearn::DenoisingRecurrentNet::unconditionalFprop ( Vec  train_costs,
Vec  train_n_items 
) const [private]

Definition at line 1082 of file DenoisingRecurrentNet.cc.

References PLearn::endl(), i, input_list, PLearn::TVec< T >::length(), mean_encoded_vec, nll_list, PLearn::perr, PLERROR, PLWARNING, PLearn::TVec< T >::resize(), PLearn::safelog(), target_prediction_list, and targets_list.

Referenced by test().

{
    int pred_size = mean_encoded_vec.length();
    if(pred_size<=0)
        PLERROR("mean_encoded_vec not properly initialized. Did you call trainUnconditionalPredictor prior to unconditionalFprop ?");

    int l = input_list.length();
    int tar = 0;
    train_n_items[tar] += l;
    target_prediction_list[tar].resize(l,pred_size);
    for(int i=0; i<l; i++)
    {        
        Vec target_prediction_i = target_prediction_list[tar](i);
        target_prediction_i << mean_encoded_vec;
        Vec target_vec = targets_list[tar](i);

        double nllcost = 0;
        for(int k=0; k<target_vec.length(); k++)
            if(target_vec[k]!=0)
                nllcost -= target_vec[k]*safelog(target_prediction_i[k]);
        nll_list(i,tar) = nllcost;

        if (isinf(nll_list(i,tar)))
        {
            PLWARNING("Row %d of sequence of length %d lead to inf cost",i,l);
            perr << "Problem at positions (vec of length " << target_vec.length() << "): ";
            for(int k=0; k<target_vec.length(); k++)
                if(target_vec[k]!=0 && target_prediction_i[k]==0)
                    perr << k << " ";
            perr << endl;
            // perr << "target_vec = " << target_vec << endl;
            // perr << "target_prediction_i = " << target_prediction_i << endl;
        }
        else
            train_costs[tar] += nll_list(i,tar);
    }
}
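The hand-rolled cost in the loop is the usual negative log-likelihood of the targets under the (constant) mean prediction:

\[ \mathrm{nll\_list}(i, \mathrm{tar}) = - \sum_{k:\, y_k \neq 0} y_k \, \log \hat y_k , \qquad \hat y = \mathrm{mean\_encoded\_vec} , \]

which is infinite exactly when some \(y_k \neq 0\) has \(\hat y_k = 0\); that is the situation the PLWARNING branch reports.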

void PLearn::DenoisingRecurrentNet::updateInputReconstructionFromHidden ( Vec  hidden,
Mat reconstruction_weights,
Mat acc_weights_gr,
Vec input_reconstruction_bias,
Vec  input_reconstruction_prob,
Vec  clean_input,
Vec  hidden_gradient,
double  input_reconstruction_cost_weight,
double  lr 
) [private]

Backpropagates the reconstruction cost (after comparison with clean_input) with learning rate input_reconstruction_lr; accumulates the gradient in hidden_gradient and updates reconstruction_weights and input_reconstruction_bias.

Definition at line 1452 of file DenoisingRecurrentNet.cc.

References PLearn::externalProductScaleAcc(), input_reconstruction_cost_weight, input_reconstruction_prob, PLearn::multiplyAcc(), and PLearn::productAcc().

Referenced by fpropUpdateInputReconstructionFromHidden().

{
    // gradient of -log softmax is just  output_of_softmax - onehot_target
    // so let's accumulate this in hidden_gradient
    Vec input_reconstruction_activation_grad = input_reconstruction_prob;
    input_reconstruction_activation_grad -= clean_input;
    input_reconstruction_activation_grad *= input_reconstruction_cost_weight;

    // update bias
    multiplyAcc(input_reconstruction_bias, input_reconstruction_activation_grad, -lr);

    // update weight: accumulate the gradient only. The reconstruction_weights are
    // tied to the input_connection weights, which get updated once in recurrentUpdate();
    // updating them directly here would update them twice.
    // WARNING: this reasoning no longer holds if the weights are not tied!
    externalProductScaleAcc(acc_weights_gr, hidden, input_reconstruction_activation_grad, -lr);

    // accumulate in hidden_gradient
    productAcc(hidden_gradient, reconstruction_weights, input_reconstruction_activation_grad);
}
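The shortcut used at the top of this function is the standard softmax/cross-entropy identity: if \(p = \mathrm{softmax}(a)\) and \(y\) is a target with \(\sum_k y_k = 1\), then

\[ \frac{\partial}{\partial a_j} \Big( - \sum_k y_k \log p_k \Big) = p_j - y_j , \]

so the activation gradient is just input_reconstruction_prob - clean_input, scaled by input_reconstruction_cost_weight.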

void PLearn::DenoisingRecurrentNet::updateTargetLayer ( Vec grad,
Vec bias,
real lr 
) [private]

Definition at line 1243 of file DenoisingRecurrentNet.cc.

References b, PLearn::TVec< T >::data(), i, and PLearn::TVec< T >::length().

Referenced by recurrentUpdate().

{
    real* b = bias.data();
    real* gb = grad.data();
    int size = bias.length();

    for( int i=0 ; i<size ; i++ )
        b[i] -= lr * gb[i];
}
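In other words, a plain gradient step on the target layer's bias:

\[ b_i \leftarrow b_i - \eta\, g_i , \]

with \(\eta\) the lr argument and \(g\) the incoming gradient.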


Member Data Documentation

Reimplemented from PLearn::PLearner.

Definition at line 328 of file DenoisingRecurrentNet.h.

PLearn::DenoisingRecurrentNet::acc_dynamic_connections_gr [protected]

Definition at line 347 of file DenoisingRecurrentNet.h.

Referenced by makeDeepCopyFromShallowCopy(), and recurrentUpdate().

PLearn::DenoisingRecurrentNet::acc_hidden_bias_gr [protected]

Stores the accumulated hidden bias gradient.

Definition at line 355 of file DenoisingRecurrentNet.h.

Referenced by makeDeepCopyFromShallowCopy().

PLearn::DenoisingRecurrentNet::acc_input_connections_gr [protected]

Definition at line 345 of file DenoisingRecurrentNet.h.

Referenced by makeDeepCopyFromShallowCopy(), and recurrentUpdate().

PLearn::DenoisingRecurrentNet::acc_recons_bias_gr [protected]

Stores the accumulated reconstruction bias gradient.

Definition at line 358 of file DenoisingRecurrentNet.h.

Referenced by makeDeepCopyFromShallowCopy().

PLearn::DenoisingRecurrentNet::acc_reconstruction_dynamic_connections_gr [protected]

Definition at line 349 of file DenoisingRecurrentNet.h.

Referenced by makeDeepCopyFromShallowCopy(), and recurrentUpdate().

PLearn::DenoisingRecurrentNet::acc_target_bias_gr [protected]

Stores the accumulated target bias gradient.

Definition at line 352 of file DenoisingRecurrentNet.h.

Referenced by makeDeepCopyFromShallowCopy().

PLearn::DenoisingRecurrentNet::acc_target_connections_gr [protected]

Definition at line 343 of file DenoisingRecurrentNet.h.

Referenced by makeDeepCopyFromShallowCopy(), and recurrentUpdate().

PLearn::DenoisingRecurrentNet::bias_gradient [protected]

Stores the bias gradient.

Definition at line 361 of file DenoisingRecurrentNet.h.

Referenced by makeDeepCopyFromShallowCopy(), and recurrentUpdate().

PLearn::DenoisingRecurrentNet::clean_encoded_seq [protected]

Definition at line 412 of file DenoisingRecurrentNet.h.

Referenced by makeDeepCopyFromShallowCopy().

PLearn::DenoisingRecurrentNet::current_learning_rate [protected]

Definition at line 338 of file DenoisingRecurrentNet.h.

Referenced by recurrentUpdate(), and setLearningRate().

PLearn::DenoisingRecurrentNet::data [protected]

Stores external data.

Definition at line 341 of file DenoisingRecurrentNet.h.

Referenced by generate(), generateArtificial(), and makeDeepCopyFromShallowCopy().

PLearn::DenoisingRecurrentNet::dynamic_gradient_scale_factor

Definition at line 151 of file DenoisingRecurrentNet.h.

Referenced by declareOptions(), and train().

PLearn::DenoisingRecurrentNet::dynamic_reconstruction_connections

The RBMConnection for the reconstruction between the hidden layers, through time.

Definition at line 102 of file DenoisingRecurrentNet.h.

Referenced by build_(), declareOptions(), forget(), getDynamicReconstructionConnectionsWeightMatrix(), makeDeepCopyFromShallowCopy(), and setLearningRate().

PLearn::DenoisingRecurrentNet::encoding

Chooses what type of encoding to apply to an input sequence. Possibilities: "timeframe", "note_duration", "note_octav_duration", "raw_masked_supervised".

Definition at line 127 of file DenoisingRecurrentNet.h.

Referenced by build_(), declareOptions(), encodeSequence(), encodeSequenceAndPopulateLists(), and fpropInputReconstructionFromHidden().

PLearn::DenoisingRecurrentNet::end_of_sequence_symbol

Value of the first input component for the end-of-sequence delimiter.

Definition at line 84 of file DenoisingRecurrentNet.h.

Referenced by declareOptions(), generate(), generateArtificial(), setTrainingSet(), and test().

PLearn::DenoisingRecurrentNet::hidden2_list [protected]

List of second hidden layer values.

Definition at line 380 of file DenoisingRecurrentNet.h.

Referenced by generate(), generateArtificial(), makeDeepCopyFromShallowCopy(), recurrentFprop(), recurrentUpdate(), and resize_lists().

PLearn::DenoisingRecurrentNet::hidden_connections

The RBMConnection between the first and second hidden layers (optional).

Definition at line 105 of file DenoisingRecurrentNet.h.

Referenced by build_(), declareOptions(), forget(), generate(), generateArtificial(), makeDeepCopyFromShallowCopy(), recurrentFprop(), recurrentUpdate(), and setLearningRate().

PLearn::DenoisingRecurrentNet::hidden_gradient [protected]

Stores the hidden gradient of dynamic connections.

Definition at line 367 of file DenoisingRecurrentNet.h.

Referenced by makeDeepCopyFromShallowCopy(), and recurrentUpdate().

PLearn::DenoisingRecurrentNet::hidden_list [protected]

List of hidden layer values.

Definition at line 374 of file DenoisingRecurrentNet.h.

Referenced by generate(), generateArtificial(), makeDeepCopyFromShallowCopy(), recurrentFprop(), recurrentUpdate(), and resize_lists().

PLearn::DenoisingRecurrentNet::hidden_noise_prob

Definition at line 144 of file DenoisingRecurrentNet.h.

Referenced by declareOptions(), and train().

PLearn::DenoisingRecurrentNet::hidden_reconstruction_bias

Definition at line 163 of file DenoisingRecurrentNet.h.

Referenced by makeDeepCopyFromShallowCopy(), and recurrentUpdate().

PLearn::DenoisingRecurrentNet::hidden_reconstruction_bias2

Definition at line 165 of file DenoisingRecurrentNet.h.

Referenced by makeDeepCopyFromShallowCopy(), and recurrentUpdate().

PLearn::DenoisingRecurrentNet::hidden_reconstruction_cost_weight

Definition at line 145 of file DenoisingRecurrentNet.h.

Referenced by declareOptions(), and train().

PLearn::DenoisingRecurrentNet::hidden_reconstruction_prob [protected]

Definition at line 416 of file DenoisingRecurrentNet.h.

Referenced by makeDeepCopyFromShallowCopy(), and recurrentUpdate().

PLearn::DenoisingRecurrentNet::hidden_temporal_gradient [protected]

Stores the hidden gradient of dynamic connections coming from time t+1.

Definition at line 370 of file DenoisingRecurrentNet.h.

Referenced by makeDeepCopyFromShallowCopy(), and recurrentUpdate().

PLearn::DenoisingRecurrentNet::input_layer

The input layer of the model.

Definition at line 87 of file DenoisingRecurrentNet.h.

Referenced by build_(), declareOptions(), forget(), generate(), generateArtificial(), makeDeepCopyFromShallowCopy(), and setLearningRate().

PLearn::DenoisingRecurrentNet::input_reconstruction_bias

Definition at line 160 of file DenoisingRecurrentNet.h.

Referenced by forget(), makeDeepCopyFromShallowCopy(), and recurrentUpdate().

PLearn::DenoisingRecurrentNet::input_noise_prob

Definition at line 143 of file DenoisingRecurrentNet.h.

Referenced by declareOptions(), and train().

PLearn::DenoisingRecurrentNet::input_symbol_sizes

Number of symbols for each symbolic field of train_set.

Definition at line 120 of file DenoisingRecurrentNet.h.

Referenced by build_(), declareOptions(), generate(), generateArtificial(), and makeDeepCopyFromShallowCopy().

PLearn::DenoisingRecurrentNet::L1_penalty_factor

Optional (default=0) factor of L1 regularization term.

Definition at line 132 of file DenoisingRecurrentNet.h.

Referenced by applyWeightPenalty(), bpropUpdateConnection(), and declareOptions().

PLearn::DenoisingRecurrentNet::L2_penalty_factor

Optional (default=0) factor of L2 regularization term.

Definition at line 135 of file DenoisingRecurrentNet.h.

Referenced by applyWeightPenalty(), bpropUpdateConnection(), and declareOptions().

PLearn::DenoisingRecurrentNet::nb_stage_reconstruction

Definition at line 171 of file DenoisingRecurrentNet.h.

Referenced by declareOptions(), and train().

PLearn::DenoisingRecurrentNet::nb_stage_target

Definition at line 172 of file DenoisingRecurrentNet.h.

Referenced by declareOptions().

PLearn::DenoisingRecurrentNet::nll_list [protected]

List of the NLL of the input samples in a sequence.

Definition at line 398 of file DenoisingRecurrentNet.h.

Referenced by generate(), generateArtificial(), makeDeepCopyFromShallowCopy(), recurrentFprop(), recurrentUpdate(), resize_lists(), and unconditionalFprop().

PLearn::DenoisingRecurrentNet::noise

Definition at line 129 of file DenoisingRecurrentNet.h.

Referenced by train().

PLearn::DenoisingRecurrentNet::noisy_recurrent_lr

Definition at line 150 of file DenoisingRecurrentNet.h.

Referenced by declareOptions(), and train().

PLearn::DenoisingRecurrentNet::prediction_cost_weight

Definition at line 167 of file DenoisingRecurrentNet.h.

Referenced by declareOptions(), and train().

PLearn::DenoisingRecurrentNet::recurrent_lr

Definition at line 154 of file DenoisingRecurrentNet.h.

Referenced by declareOptions(), and train().

PLearn::DenoisingRecurrentNet::target_layers_n_of_target_elements

Number of elements in the target part of a VMatrix associated to each target layer.

Definition at line 117 of file DenoisingRecurrentNet.h.

Referenced by build_(), declareOptions(), generate(), generateArtificial(), and makeDeepCopyFromShallowCopy().

PLearn::DenoisingRecurrentNet::target_layers_weights

The training weights of each target layer.

Definition at line 76 of file DenoisingRecurrentNet.h.

Referenced by build_(), declareOptions(), generate(), generateArtificial(), makeDeepCopyFromShallowCopy(), recurrentFprop(), recurrentUpdate(), test(), and train().

PLearn::DenoisingRecurrentNet::target_symbol_sizes

Number of symbols for each symbolic field of train_set.

Definition at line 123 of file DenoisingRecurrentNet.h.

Referenced by build_(), declareOptions(), generate(), generateArtificial(), and makeDeepCopyFromShallowCopy().

PLearn::DenoisingRecurrentNet::testset_boundaries [protected]

Definition at line 408 of file DenoisingRecurrentNet.h.

Referenced by makeDeepCopyFromShallowCopy(), and test().

PLearn::DenoisingRecurrentNet::tied_hidden_reconstruction_weights

Definition at line 146 of file DenoisingRecurrentNet.h.

Referenced by declareOptions().

PLearn::DenoisingRecurrentNet::tied_input_reconstruction_weights

Definition at line 141 of file DenoisingRecurrentNet.h.

Referenced by declareOptions().

PLearn::DenoisingRecurrentNet::use_target_layers_masks

Indication that a mask indicating which targets to predict is present in the input part of the VMatrix dataset. No longer an option: it is set to true only when encoding == "raw_masked_supervised".

Definition at line 81 of file DenoisingRecurrentNet.h.

Referenced by build_(), encodeAndCreateSupervisedSequence(), encodeAndCreateSupervisedSequence2(), generate(), generateArtificial(), recurrentFprop(), and recurrentUpdate().

PLearn::DenoisingRecurrentNet::visi_bias_gradient [protected]

Stores the visible bias gradient.

Definition at line 364 of file DenoisingRecurrentNet.h.

Referenced by makeDeepCopyFromShallowCopy(), and recurrentUpdate().


The documentation for this class was generated from the following files:

DenoisingRecurrentNet.h
DenoisingRecurrentNet.cc