Research Overview

Note: this is taken directly from an email I sent to a friend from high school. I modified it slightly to fix spelling errors and improve clarity.

Indeed I'm working at the intersection of machine learning and music. Machine learning is basically a kind of artificial intelligence. What distinguishes it is a focus on finding statistical regularities in data, as opposed to cooking up sets of rules or complex computer programs by hand. en.wikipedia.org/wiki/Machine_learning

I'm interested in a number of areas. All of them have to do with music, but they're not necessarily related beyond that. One of them is measuring similarity between two audio files. This seems dry and stupid, perhaps, but it's an interesting problem once you dig into it. Imagine you have a database of 10 million MP3s. How would you ever know what to listen to in that database? You could always listen to music you already know, but that might get boring. You could "shuffle play", but that would be maddening given the variety of music you'd find in 10 million songs. What we're doing is trying to say what makes two songs similar. This is a slippery concept because music similarity is (a) somewhat user dependent: my idea of similar is not the same as yours; and (b) context dependent: my idea of similar changes when I make a playlist for jogging versus one for a dinner party. I might choose the same song (say "Taxman" by the Beatles) in both cases, but for the jogging playlist it would be the tempo that drove the selection, while for the dinner party it would be "I like the album Revolver and want to add it to the mix". If you're really interested in this topic, see a video of me (little ol' me ;-) giving a tech talk at Google.
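Just to make the idea concrete, here is a toy sketch of one common baseline for audio similarity (not a description of our actual system): summarize each song by the average of its MFCC timbre features, then compare the summaries with a cosine score. The use of the librosa library and the file names are assumptions for illustration.

```python
# Toy audio-similarity baseline: summarize each song by the mean of its
# MFCC (timbre) features, then compare summaries with cosine similarity.
# Assumes the librosa library; the file names below are placeholders.

import numpy as np
import librosa

def timbre_summary(path):
    """Load an audio file and return its mean MFCC vector."""
    y, sr = librosa.load(path, sr=22050, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)

def similarity(path_a, path_b):
    """Cosine similarity between two songs' timbre summaries (1.0 = identical)."""
    a, b = timbre_summary(path_a), timbre_summary(path_b)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(similarity("song_a.mp3", "song_b.mp3"))
```

A real system would need much richer features (rhythm, harmony, even metadata) and, given points (a) and (b) above, some way to adapt to the user and the context; this sketch captures only the crudest timbral notion of "sounds alike".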

Another topic is that of music performance. In short: what makes a musical performance differ from its score? Why do we bother to pay professional musicians to play music in the first place? Imagine you have a computer-driven robot that is capable of sitting down at a piano and playing it with perfect dexterity. That is, there is no limit to the kinds of movements the computer could tell the robot to make. The challenge becomes: what *should* the robot play? That is, how does the robot interpret the score? Now imagine we give the robot the score of a Chopin etude. What, in fact, would the robot need as prior knowledge in order to play that score "well"? By "well" I mean, at the very least, expressively... "with feeling"... and probably "in the style in which we play Chopin". But what does that mean? Can we quantify it? For me the question is essentially a machine learning question: if we showed the robot a huge collection of Chopin performances, from the best in the world all the way down to that of a struggling teenage pianist, could that computer-robot learn to play well simply by listening to them all and analyzing them? If so, what kind of analysis algorithms would we build into the robot?
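To give a flavor of how this could be framed as a learning problem (a deliberately simplified sketch, not our lab's algorithm): suppose every note in a score has been aligned with the corresponding note in a set of human performances. One could then train a regression model to predict each note's expressive deviations from simple score features. The feature and target choices, the tiny hand-typed data, and the use of scikit-learn here are all illustrative assumptions.

```python
# Sketch of learning expressive performance: predict each note's
# timing deviation and loudness from simple score features, using
# score/performance note pairs as training data. Purely illustrative.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical training data: one row per performed note.
# Features: [beat position in bar, MIDI pitch, score duration in beats]
X = np.array([
    [0.0, 64, 1.0],
    [1.0, 67, 0.5],
    [1.5, 72, 0.5],
    # ... in practice, thousands of notes from many aligned performances
])
# Targets: [onset deviation from the score in seconds, MIDI velocity]
y = np.array([
    [0.02, 74],
    [-0.01, 60],
    [0.00, 55],
])

model = RandomForestRegressor(n_estimators=100).fit(X, y)

# For a new score note, predict how expressively to play it.
timing_dev, velocity = model.predict([[0.5, 69, 1.0]])[0]
```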

I attached two MP3 files of the same piece of music, a Chopin etude. The first, deadpan.mp3, is made directly from the score (stored as MIDI) with no expressive timing or dynamics added. The other, expressive.mp3, is made from a piano performance (by a real pianist) that was stored as MIDI. Both of the MIDI files were recorded on the Boesendorfer Imperial grand piano in my Music Performance Lab. The piano is able to play itself, so to speak, using motors controlled by a computer (thus the analogy of a robot). What's important is that the two performances were stored on the computer in the same way... the only differences between the two are what can be stored in MIDI, namely note timing, note velocity, and pedaling. If you take the time to listen to both MP3s, you'll probably agree that there is a lot more to hear in expressive.mp3 than in deadpan.mp3. My goal is to narrow that gap.
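For the curious, here is a small sketch of how one might inspect exactly those three dimensions in the two renditions, assuming the pretty_midi library and hypothetical file names (deadpan.mid, expressive.mid) for the underlying MIDI data:

```python
# Compare two MIDI renditions along the only dimensions MIDI stores:
# note timing, note velocity, and (sustain) pedaling.
# Assumes the pretty_midi library; file names are placeholders.

import pretty_midi

def describe(path):
    pm = pretty_midi.PrettyMIDI(path)
    notes = [n for inst in pm.instruments for n in inst.notes]
    # Sustain pedal messages arrive as MIDI control change number 64.
    pedal = [cc for inst in pm.instruments
             for cc in inst.control_changes if cc.number == 64]
    velocities = [n.velocity for n in notes]
    print(f"{path}: {len(notes)} notes, "
          f"velocity range {min(velocities)}-{max(velocities)}, "
          f"{len(pedal)} sustain-pedal events")

describe("deadpan.mid")     # expect flat velocities, little or no pedaling
describe("expressive.mid")  # expect a wide velocity range and rich pedaling
```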

Douglas Eck