Edith Law (of TagATune fame) and Olivier Gillet have put together one of the most complete MIR research datasets since uspop2002. The data (with the best name ever) is called magnatagatune. It contains:
- Human annotations collected by Edith Law’s TagATune game.
- The corresponding sound clips from magnatune.com, encoded as 16 kHz, 32 kbps mono MP3 (generously contributed by John Buckman, the founder of every MIR researcher’s favorite label, Magnatune).
- A detailed analysis from The Echo Nest of each track’s structure and musical content, including rhythm, pitch and timbre.
- All of the source code for generating the dataset distribution.
Some detailed stats of the data, calculated by Olivier:
- clips: 25,863
- source mp3s: 5,405
- albums: 446
- artists: 230
- unique tags: 188
- similarity triples: 533
- votes for the similarity judgments: 7,650
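Stats like these fall out of a quick pass over the dataset’s annotation table. Here is a minimal sketch of that kind of bookkeeping; the tiny inline table and its column names are made up for illustration and may not match the real distribution’s files:

```python
import csv
import io

# Hypothetical miniature of an annotation table: one row per clip, one
# binary column per tag. The real dataset's file layout may differ.
ANNOTATIONS = """clip_id\tmp3_path\tguitar\tclassical\tvocal
2\tf/track01.mp3\t0\t1\t0
6\tf/track01.mp3\t1\t0\t0
10\tf/track02.mp3\t0\t0\t1
"""

def dataset_stats(text):
    """Compute clip, source-mp3, and used-tag counts from a tab-separated table."""
    rows = list(csv.DictReader(io.StringIO(text), delimiter="\t"))
    tag_columns = [c for c in rows[0] if c not in ("clip_id", "mp3_path")]
    return {
        "clips": len(rows),
        # Several clips are cut from the same source mp3, so de-duplicate paths.
        "source mp3s": len({r["mp3_path"] for r in rows}),
        # A tag counts as used if at least one clip carries it.
        "unique tags": sum(any(r[t] == "1" for r in rows) for t in tag_columns),
    }

print(dataset_stats(ANNOTATIONS))  # → {'clips': 3, 'source mp3s': 2, 'unique tags': 3}
```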
This dataset is one-stop shopping for all sorts of MIR-related tasks, including:
- Artist identification
- Genre classification
- Mood classification
- Instrument identification
- Music similarity
- Automatic playlist generation
As part of the dataset, The Echo Nest is providing a detailed analysis of each of the 25,000+ clips. This analysis includes a description of all musical events, structures and global attributes, such as key, loudness, time signature, tempo, beats, sections, and harmony. This is the same information that is provided by our track-level API, described at developer.echonest.com.
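The per-clip analysis ships as an XML file, so pulling out global attributes like tempo or beat positions is a few lines of standard-library parsing. The fragment below is a made-up stand-in for one clip’s analysis (the real Echo Nest schema uses its own element names and carries far more, including per-segment timbre and pitch vectors):

```python
import xml.etree.ElementTree as ET

# Hypothetical analysis fragment for one clip; element and attribute
# names here are illustrative, not the actual Echo Nest schema.
ANALYSIS_XML = """
<analysis>
  <track tempo="125.9" key="4" mode="1" loudness="-9.7" time_signature="4"/>
  <beats>
    <beat start="0.12"/>
    <beat start="0.60"/>
  </beats>
</analysis>
"""

root = ET.fromstring(ANALYSIS_XML)

# Global attributes live on the <track> element in this sketch.
track = root.find("track")
tempo = float(track.get("tempo"))
loudness = float(track.get("loudness"))

# Event lists (beats here) are parsed into plain Python floats.
beats = [float(b.get("start")) for b in root.find("beats")]

print(tempo, loudness, beats)  # → 125.9 -9.7 [0.12, 0.6]
```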
Note that Olivier and Edith mention me by name in their release announcement, but really I was just the go-between. Tristan (one of the co-founders of The Echo Nest) did the analysis, and The Echo Nest compute infrastructure got it done fast (analyzing the 25,000 tracks took much less time than downloading the audio did).
I expect to see this dataset become one of the oft-cited datasets in MIR research.
Here’s the official announcement:
Edith Law, John Buckman, Paul Lamere and myself are proud to announce the release of the Magnatagatune dataset.
This dataset consists of ~25,000 music clips, each 29 seconds long and annotated with a combination of tags drawn from a vocabulary of 188. The annotations have been collected through Edith’s “TagATune” game (http://www.gwap.com/gwap/gamesPreview/tagatune/). The clips are excerpts of songs published by Magnatune.com, and John from Magnatune has approved the release of the audio clips for research purposes. For those of you who are not happy with the quality of the clips (mono, 16 kHz, 32 kbps), we also provide scripts to fetch the mp3s and cut them to recreate the collection. Wait… there’s more! Paul Lamere from The Echo Nest has provided, for each of these songs, an “analysis” XML file containing timbre, rhythm and harmonic-content related features.
The dataset also contains a smaller set of annotations for music similarity: given a triple of songs (A, B, C), how many players have flagged the song A, B or C as most different from the others.
Everything is distributed freely under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 license, and is available here: http://tagatune.org/Datasets.html
This dataset is ever-growing: as more users play TagATune, more annotations will be collected, and new snapshots of the data will be released in the future. A new version of TagATune will be up by next Monday (April 6). To make this dataset grow even faster, please go to http://www.gwap.com/gwap/gamesPreview/tagatune/ next Monday and start playing.
The Magnatagatune team
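The similarity annotations described in the announcement boil down to per-triple vote tallies: for each (A, B, C) triple, how many players flagged each clip as the odd one out. A minimal sketch of that aggregation, with made-up vote records for illustration:

```python
from collections import Counter

# Hypothetical vote records: each entry names a triple's three clips and
# the clip a player flagged as most different from the other two.
votes = [
    (("A", "B", "C"), "C"),
    (("A", "B", "C"), "C"),
    (("A", "B", "C"), "A"),
]

def tally(votes):
    """Count, per triple, how many players picked each clip as the odd one out."""
    counts = {}
    for triple, odd_one_out in votes:
        counts.setdefault(triple, Counter())[odd_one_out] += 1
    return counts

result = tally(votes)
print(result[("A", "B", "C")])  # → Counter({'C': 2, 'A': 1})
```

A researcher could normalize these counts into relative-dissimilarity scores, which is one way the dataset supports the music-similarity tasks listed earlier.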
Welcome to Music Machinery - the blog about the interface of music and technology written by Paul Lamere.