One major challenge in using statistical sequence learning methods in the domain of music lies in bridging the long time lags that separate important
musical events. Consider, for example, the chord changes that convey the basic structure of a pop song. A sequence learner that cannot predict chord
changes will almost certainly not be able to generate new examples in a musical style or to categorize songs by style. Yet it is surprisingly
difficult for a sequence learner to bridge the long time lags needed to predict when a chord change will occur and what its new value will be,
because chord changes can be separated by dozens or hundreds of intervening notes. One could solve this problem by treating chords as
special (as Mozer did, NIPS 1991). But this is impractical: it requires chords to be specially labeled in the dataset, preventing the model from
being applied to unlabeled examples, and furthermore it does not address the general issue of nested temporal structure in music.
I will
briefly describe this temporal structure (commonly known as "meter") and present a model that exploits the assumption that sequences are
metrical. The model consists of an autocorrelation-based filter that estimates online the most likely metrical tree (i.e. the frequency and phase
of the beat, measure, phrase, etc.) and uses that estimate to generate a set of sequences varying at different rates, one for each level
in the hierarchy. Multiple learners can then treat each series separately, and their predictions can be combined to perform composition and
categorization. I will present preliminary results that demonstrate the usefulness of this approach. Time permitting, I will also compare the model
to alternative approaches.
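
To make the filtering step concrete, here is a minimal sketch of the idea, assuming a binary onset sequence as input; the function names and parameters are illustrative, not the model's actual implementation:

    import numpy as np

    def estimate_period(onsets, min_lag=2, max_lag=64):
        """Pick the lag with the strongest autocorrelation as the
        dominant periodicity (in events) of a binary onset sequence."""
        x = onsets - onsets.mean()
        denom = np.dot(x, x) + 1e-12  # lag-0 energy, for normalization
        acf = np.array([np.dot(x[:-lag], x[lag:]) / denom
                        for lag in range(min_lag, max_lag + 1)])
        return min_lag + int(np.argmax(acf))

    def split_by_level(sequence, period, depth=3):
        """Resample a sequence at successively slower metrical levels:
        every event, every `period` events, every `period**2` events."""
        levels, stride = [], 1
        for _ in range(depth):
            levels.append(sequence[::stride])
            stride *= period
        return levels

    # Toy usage: a rhythmic pattern that repeats every 8 events.
    onsets = np.tile(np.array([1, 0, 0, 0, 1, 0, 1, 0], dtype=float), 16)
    period = estimate_period(onsets)         # finds the lag-8 peak
    series = split_by_level(onsets, period)  # fast, medium, and slow series

In the full model, one learner per level would be trained on each such series and their predictions combined, as described above.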