How to process a million songs in 20 minutes

The recently released Million Song Dataset (MSD), a collaborative project between The Echo Nest and Columbia's LabROSA, is a fantastic resource for music researchers. It contains detailed acoustic and contextual data for a million songs. However, getting started with the dataset can be a bit daunting. First of all, the dataset is huge (around 300 GB), which is more than most people want to download. Second, it is such a big dataset that processing it in a traditional fashion, one track at a time, is going to take a long time. Even if you can process a track in 100 milliseconds, it will still take over a day to process all of the tracks in the dataset. Luckily, there are techniques such as MapReduce that make processing big data scalable over multiple CPUs. In this post I shall describe how we can use Amazon's Elastic MapReduce to easily process the million song dataset.

The Problem

From 'Creating Music by Listening' by Tristan Jehan

For this first experiment in processing the Million Song Dataset I want to do something fairly simple and yet still interesting. One easy calculation is to determine each song's density, where the density is defined as the average number of notes or atomic sounds (called segments) per second in a song. To calculate the density we just divide the number of segments in a song by the song's duration. The set of segments for a track is already calculated in the MSD: an onset detector is used to identify atomic units of sound such as individual notes, chords, drum sounds, etc. Each segment represents a rich, complex, and usually short polyphonic sound. In the above graph the audio signal (in blue) is divided into about 18 segments (marked by the red lines). The resulting segments vary in duration. We should expect that high density songs will have lots of activity (as an Emperor once said, "too many notes"), while low density songs won't have very much going on. For this experiment I'll calculate the density of all one million songs and find the most dense and the least dense songs.

MapReduce
A traditional approach to processing a set of tracks would be to iterate through each track, process the track, and report the result. This approach, although simple, will not scale very well as the number of tracks or the complexity of the per track calculation increases.  Luckily, a number of scalable programming models have emerged in the last decade to make tackling this type of problem more tractable. One such approach is MapReduce.

MapReduce is a programming model developed by researchers at Google for processing and generating large data sets. With MapReduce you specify a map function that processes a key/value pair to generate a set of intermediate key/value pairs, and a reduce function that merges all intermediate values associated with the same intermediate key. There are a number of implementations of MapReduce, including the popular open source Hadoop and Amazon's Elastic MapReduce.

There's a nifty MapReduce Python library developed by the folks at Yelp called mrjob. With mrjob you can write a MapReduce task in Python and run it as a standalone app while you test and debug it. When your mrjob is ready, you can then launch it on a Hadoop cluster (if you have one), or run the job on tens or even hundreds of CPUs using Amazon's Elastic MapReduce. Writing an mrjob MapReduce task couldn't be easier. Here's the classic word counter example written with mrjob:

from mrjob.job import MRJob

class MRWordCounter(MRJob):
    def mapper(self, key, line):
        for word in line.split():
            yield word, 1

    def reducer(self, word, occurrences):
        yield word, sum(occurrences)

if __name__ == '__main__':
    MRWordCounter.run()

The input is presented to the mapper function, one line at a time. The mapper breaks the line into a set of words and emits a word count of 1 for each word that it finds. The reducer is called with a list of the emitted counts for each word; it sums up the counts and emits the total.

When you run your job in standalone mode, it runs in a single thread, but when you run it on Hadoop or Amazon (which you can do by adding a few command-line switches), the job is spread out over all of the available CPUs.
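
For example, if the word counter above is saved as wordcount.py (the filename here is just for illustration), you can run it locally on any text file like this:

  % python wordcount.py some_text_file.txt

The same script, unchanged, can then be pointed at a Hadoop cluster or at Elastic MapReduce by adding the appropriate -r switch, as we'll see below.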

MapReduce job to calculate density
We can calculate the density of each track with this very simple mrjob – in fact, we don’t even need a reducer step:

from mrjob.job import MRJob

import track


class MRDensity(MRJob):
    """ A map-reduce job that calculates the density """

    def mapper(self, _, line):
        """ The mapper loads a track and yields its density """
        t = track.load_track(line)
        if t:
            if t['tempo'] > 0:
                density = len(t['segments']) / t['duration']
                yield (t['artist_name'], t['title'], t['song_id']), density


if __name__ == '__main__':
    MRDensity.run()

(see the full code on github)

The mapper loads a line and parses it into a track dictionary (more on this in a bit). If we have a good track with a tempo, we calculate the density by dividing the number of segments by the song's duration.

Parsing the Million Song Dataset
We want to be able to process the MSD with code running on Amazon's Elastic MapReduce. Since the easiest way to get data to Elastic MapReduce is via Amazon's Simple Storage Service (S3), we've loaded the entire MSD into a single S3 bucket at http://tbmmsd.s3.amazonaws.com/. (The 'tbm' stands for Thierry Bertin-Mahieux, the man behind the MSD.) This bucket contains around 300 files, each with data on about 3,000 tracks. Each file is formatted with one track per line, following the format described in the MSD field list. You can see a small subset of this data for just 20 tracks in this file on github: tiny.dat. I've written track.py, which will parse this track data and return a dictionary containing all the data.
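
Here's a minimal sketch of how you might use track.py outside of a MapReduce job to inspect a few tracks (this assumes track.py and tiny.dat from the github repo are in your working directory):

import track

with open('tiny.dat') as f:
    for line in f:
        t = track.load_track(line)
        if t:
            # print a few fields from the parsed track dictionary
            print('%s - %s: %d segments over %.1f seconds' %
                  (t['artist_name'], t['title'],
                   len(t['segments']), t['duration']))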

You are welcome to use this S3 version of the MSD for your Elastic MapReduce experiments.  But note that we are making the S3 bucket containing the MSD available as an experiment.  If you run your MapReduce jobs in the “US Standard Region” of Amazon, it should cost us little or no money to make this S3 data available.  If you want to download the MSD, please don’t download it from the S3 bucket, instead go to one of the other sources of MSD data such as Infochimps.  We’ll keep the S3 MSD data live as long as people don’t abuse it.

Running the Density MapReduce job

You can run the density MapReduce job on a local file to make sure that it works:

  % python density.py tiny.dat

This produces output like the following:

["Planet P Project", "Pink World", "SOIAZJW12AB01853F1"]	3.3800521773317689
["Gleave", "Come With Me", "SOKBZHG12A81C21426"]	7.0173630509232234
["Chokebore", "Popular Modern Themes", "SOGVJUR12A8C13485C"]	2.7012807851495166
["Casual", "I Didn't Mean To", "SOMZWCG12A8C13C480"]	4.4351713380683542
["Minni the Moocher", "Rosi_ das M\u00e4dchen aus dem Chat", "SODFMEL12AC4689D8C"]	3.7249476012698159
["Rated R", "Keepin It Real (Skit)", "SOMJBYD12A6D4F8557"]	4.1905674943168156
["F.L.Y. (Fast Life Yungstaz)", "Bands", "SOYKDDB12AB017EA7A"]	4.2953929132587785

Each 'yield' from the mapper is represented by a single line in the output, showing the track ID info and the calculated density.

Running on Amazon’s Elastic MapReduce

When you are ready to run the job on a million songs, you can run it on Elastic MapReduce. First you will need to set up an AWS account and configure your credentials so that mrjob can talk to Elastic MapReduce (the mrjob docs walk you through the setup).
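
One way to give mrjob your AWS credentials (a sketch; you can also put them in an mrjob.conf file as described in the mrjob docs) is via environment variables:

  % export AWS_ACCESS_KEY_ID=<your AWS access key>
  % export AWS_SECRET_ACCESS_KEY=<your AWS secret key>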

Once you've set things up, you can run your job on Amazon using the entire MSD as input by adding a few command-line switches like so:

 % python density.py --num-ec2-instances 100 --python-archive t.tar.gz -r emr 's3://tbmmsd/*.tsv.*' > out.dat

The '-r emr' switch says to run the job on Elastic MapReduce, and '--num-ec2-instances 100' says to run the job on 100 small EC2 instances. A small instance currently costs about ten cents an hour, billed in one-hour increments, so this job will cost about $10 to run if it finishes in less than an hour, and in fact this job takes about 20 minutes to run. If you run it on only 10 instances it will cost a dollar or two. Note that the t.tar.gz file simply contains any supporting Python code needed to run the job; in this case it contains the file track.py. See the mrjob docs for all the details on running your job on EC2.
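
If you need to build that archive yourself, something like this should do the trick (assuming track.py is sitting in your current directory):

  % tar -czf t.tar.gz track.py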

The Results
The output of this job is a million calculated densities, one for each track in the MSD. We can sort this data to find the most and least dense tracks in the dataset.
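One quick way to do that sort from the shell (a sketch, assuming a bash shell and the tab-separated out.dat produced above):

  % sort -t$'\t' -k2,2n out.dat | tail    # highest densities
  % sort -t$'\t' -k2,2n out.dat | head    # lowest densities

Here are some high density examples: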

Ichigo Ichie by Ryuji Takeuchi has a density of 9.2 segments per second

129 by Strojovna 07 has a density of 9.2 segments per second

The Feeding Circle by Makaton with a density of 9.1 segments per second

These pass the audio test: they are indeed high density tracks. Now let's look at some of the lowest density tracks.

Deviation by Biosphere with a density of 0.014 segments per second

The Wire IV by Alvin Lucier with a density of 0.014 segments per second

improvisiation_122904b by Richard Chartier with a density of 0.02 segments per second

Wrapping up
The 'density' MapReduce task is about as simple a task for processing the MSD as you'll find. Consider this the 'hello, world' of the MSD. Over the next few weeks, I'll be creating some more complex and hopefully more interesting tasks that show some of the knowledge about music that can be gleaned from the MSD.

(Thanks to Thierry Bertin-Mahieux for his work in creating the MSD and setting up the S3 buckets. Thanks to 7Digital for providing the audio samples)


  1. #1 by Eugenio Tacchini on September 4, 2011 - 6:28 pm

    Very interesting approach, thanks for sharing!

  2. #2 by ashwiniyer on September 6, 2011 - 10:15 am

    Why do I get the feeling that Spotify has a similar technology in play?

  3. #3 by Arthur Ice on September 6, 2011 - 4:36 pm

    Google popularized map reduce, they did not invent it.

  4. #4 by Twirrim on September 7, 2011 - 2:02 am

    Arthur: Not true, Jeffrey Dean and Sanjay Ghemawat of Google were the creators of MapReduce. The concepts of mapping and reducing weren’t novel, but MapReduce is the name of Google’s implementation of the process, done in a way to suit clustering on their scale. You can read their paper here: http://static.googleusercontent.com/external_content/untrusted_dlcp/labs.google.com/en/us/papers/mapreduce-osdi04.pdf

  5. #5 by test on September 7, 2011 - 4:34 pm

    I think the difference between musical notes and drum hits should be made here—those “dense” tracks are just a bunch of repetitive drum hits.

  6. #6 by tanjk on September 8, 2011 - 11:48 am

    drums play notes.

  7. #7 by Thierry BM on October 22, 2011 - 10:03 pm

    Fun fact: if you want to run it on ~500 machines to impress your ISMIR colleagues but you’re an AWS newbie, you’re limited to 20 EC2 instances; still cool, but less “pazzaz”. Also AWS EMR takes about 3 min to actually start.