Archive for category tags

The BPM Explorer

Last month I wrote about using the Echo Nest API to analyze tracks and generate plots that you can use to determine whether or not a machine is responsible for setting the beat of a song. I received many requests to analyze tracks by particular artists, far too many for me to handle without giving up my day job. To satisfy this pent-up demand for click track analysis I’ve written an application called the BPM Explorer that lets you create your own click plots. With this application you can analyze any song in your collection, view its click plot and listen to your music, synchronized with the plot. Here’s what the app looks like:

Check out the application here:  The Echo Nest BPM Explorer.  It’s written in Processing and deployed with Java Webstart, so it (should) just work.

My primary motivation for writing this application was to check out the new Echo Nest Java Client and make sure that it was easy to use from Processing. One of my secret plans is to get people in the Processing community interested in using the Echo Nest API. The Processing community is filled with ultra-creative folks who have strong artistic, programming and data visualization skills. I’d love to see more song visualizations like this and this that are built using the Echo Nest APIs. Processing is really cool – I was able to write the BPM Explorer in just a few hours (it took me longer to remember how to sign jar files for webstart than it did to write the core plotter). Processing strips away all of the boring parts of graphics programming (create a frame, lay it out with a gridbag, make it visible, validate, invalidate, repaint, paint – arghh!). In Processing, you just write a draw() method that will be called 30 times a second. I hope I get the chance to write more Processing programs.
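To give a flavor of the model, here’s a minimal Processing-style sketch; the frame rate and the sweeping-line drawing are just illustrative and aren’t taken from the BPM Explorer itself:

    // Minimal Processing sketch: setup() runs once, draw() runs every frame.
    void setup() {
      size(400, 200);     // open a 400x200 window
      frameRate(30);      // ask Processing to call draw() about 30 times a second
    }

    void draw() {
      background(255);                  // clear the frame
      float x = frameCount % width;     // sweep a marker across the window
      line(x, 0, x, height);            // no layout managers, no repaint() – just draw
    }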

Update: I’ve released the BPM Explorer code as open source – as part of the echo-nest-demos project hosted at google-code. You can also browse the source for the BPM Explorer.



Music discovery is a conversation not a dictatorship

Two big problems with music recommenders: (1) they can’t tell me why they recommended something beyond the trivial “People who liked X also liked Y”, and (2) if I want to interact with a recommender I’m all thumbs – I can usually give a recommendation a thumbs up or thumbs down, but there is no way to steer the recommender (“no more emo please!”). These problems are addressed in The Music Explaura – a web-based music exploration tool just released by Sun Labs.

The Music Explaura lets you explore the world of music artists. It gives you all the context you need – audio, artist bio, videos, photos, discographies – to help you decide whether or not a particular artist is interesting. The Explaura also gives you similar-artist style recommendations: for any artist, you are given a list of similar artists to explore. The neat thing is that for any recommended artist you can ask why it was recommended, and the Explaura will give you an explanation (in the form of an overlapping tag cloud).

The really cool bit (and this is the knock-your-socks-off type of cool) is that you can use an artist’s tag cloud to steer the recommender. If you like Jimi Hendrix, but want to find artists that are similar but less psychedelic and more blues-oriented, you can just grab the ‘psychedelic’ tag with your mouse and shrink it, and grab the ‘blues’ tag and make it bigger – you’ll instantly get an updated set of artists that are more like Cream and less like The Doors.
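I won’t pretend to know exactly how the Explaura computes this under the hood, but a minimal sketch of the general idea – re-weighting a seed artist’s tag cloud and recomputing a weighted similarity over tag vectors – might look something like this (the tags and weights are invented for illustration):

    import java.util.HashMap;
    import java.util.Map;

    // Illustration only: steer artist similarity by re-weighting a tag cloud.
    public class TagSteering {

        // Cosine similarity between a (steered) tag cloud and a candidate artist's tags.
        static double similarity(Map<String, Double> cloud, Map<String, Double> candidate) {
            double dot = 0, normA = 0, normB = 0;
            for (Map.Entry<String, Double> e : cloud.entrySet()) {
                dot += e.getValue() * candidate.getOrDefault(e.getKey(), 0.0);
                normA += e.getValue() * e.getValue();
            }
            for (double v : candidate.values()) normB += v * v;
            return (normA == 0 || normB == 0) ? 0 : dot / (Math.sqrt(normA) * Math.sqrt(normB));
        }

        public static void main(String[] args) {
            // A made-up tag cloud for the seed artist, after the user has steered it.
            Map<String, Double> steered = new HashMap<>();
            steered.put("psychedelic", 0.2);   // shrunk by the user
            steered.put("blues", 1.0);         // grown by the user
            steered.put("guitar", 0.9);

            Map<String, Double> cream = Map.of("blues", 0.9, "guitar", 0.8, "psychedelic", 0.3);
            Map<String, Double> doors = Map.of("psychedelic", 1.0, "guitar", 0.5, "blues", 0.2);

            // With 'blues' boosted and 'psychedelic' shrunk, Cream now scores higher than The Doors.
            System.out.printf("Cream: %.2f  The Doors: %.2f%n",
                    similarity(steered, cream), similarity(steered, doors));
        }
    }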

I strongly suggest that you go and play with the Explaura – it lets you take an active role in music exploration. What’s a band that is an emo version of Led Zeppelin? Blue Öyster Cult of course! What’s an artist like Britney Spears but with an indie vibe? Katy Perry! How about a band like The Beatles but recording in this decade? Try The Coral. A band like Metallica but with a female vocalist? Try Kittie. Was there anything like emo in the 60s? Try Leonard Cohen. The interactive nature of the Explaura makes it quite addictive. I can get lost for hours exploring some previously unknown corner of the music world.

Steve (the search guy) has a great post describing the Music Explaura in detail. One thing he doesn’t describe is the backend system architecture. Behind the Music Explaura is a distributed data store with item search and similarity capabilities built into the core. This makes it possible to scale the system up to millions of items and thousands of simultaneous users. It really is a nice system. (Full disclosure: I spent the last several years working on this project – so naturally I think it is pretty cool.)

The Music Explaura gives us a hint of what music discovery will be like in the future.  Instead of a world where a music vendor gives you a static list of recommended artists  we’ll live in a world  where the recommender can tell you why it is recommending an item, and you can respond by steering the recommendations away from things you don’t like and toward the things that you do like.   Music discovery will no longer be a dictatorship, it will be a two-way conversation.


Magnatagatune – a new research data set for MIR


Edith Law (of TagATune fame) and Olivier Gillet have put together one of the most complete MIR research datasets since uspop2002. The dataset (with the best name ever) is called magnatagatune. It contains:

  • Human annotations collected by Edith Law’s TagATune game.
  • The corresponding sound clips from magnatune.com, encoded in 16 kHz, 32kbps, mono mp3.  (generously contributed by John Buckman, the founder of every MIR researcher’s favorite label Magnatune)
  • A detailed analysis from The Echo Nest of each track’s structure and musical content, including rhythm, pitch and timbre.
  • All the source code for generating the dataset distribution

Some detailed stats of the data calculated by Olivier are:

  • clips: 25863
  • source mp3: 5405
  • albums: 446
  • artists: 230
  • unique tags: 188
  • similarity triples: 533
  • votes for the similarity judgments: 7650

This dataset is one-stop shopping for all sorts of MIR-related tasks including:

  • Artist Identification
  • Genre classification
  • Mood Classification
  • Instrument identification
  • Music Similarity
  • Autotagging
  • Automatic playlist generation

As part of the dataset The Echo Nest is providing a detailed analysis of each of the 25,000+ clips. This analysis includes a description of all musical events, structures and global attributes, such as key, loudness, time signature, tempo, beats, sections, and harmony. This is the same information that is provided by our track-level API, described here: developer.echonest.com.

Note that Olivier and Edith mention me by name in their release announcement, but really I was just the go-between. Tristan (one of the co-founders of The Echo Nest) did the analysis, and The Echo Nest compute infrastructure got it done fast (our analysis of the 25,000 tracks took much less time than it did to download the audio).

I expect this dataset to become one of the most frequently cited datasets among MIR researchers.

Here’s the official announcement:

Edith Law, John Buckman, Paul Lamere and myself are proud to announce the release of the Magnatagatune dataset.

This dataset consists of ~25000 29s long music clips, each of them annotated with a combination of 188 tags. The annotations have been collected through Edith’s “TagATune” game (http://www.gwap.com/gwap/gamesPreview/tagatune/). The clips are excerpts of songs published by Magnatune.com – and John from Magnatune has approved the release of the audio clips for research purposes. For those of you who are not happy with the quality of the clips (mono, 16 kHz, 32kbps), we also provide scripts to fetch the mp3s and cut them to recreate the collection. Wait… there’s more! Paul Lamere from The Echo Nest has provided, for each of these songs, an “analysis” XML file containing timbre, rhythm and harmonic-content related features.

The dataset also contains a smaller set of annotations for music similarity: given a triple of songs (A, B, C), how many players have flagged the song A, B or C as most different from the others.

Everything is distributed freely under a Creative Commons Attribution – Noncommercial-Share Alike 3.0 license ; and is available here: http://tagatune.org/Datasets.html

This dataset is ever-growing, as more users play TagATune, more annotations will be collected, and new snapshots of the data will be released in the future. A new version of TagATune will indeed be up by next Monday (April 6). To make this dataset grow even faster, please go to http://www.gwap.com/gwap/gamesPreview/tagatune/ next Monday and start playing.

Enjoy!
The Magnatagatune team



Using Visualizations for Music Discovery

As Ben pointed out last week, the ISMIR  site has posted the tutorial schedule for ISMIR 2009.  I’m happy to see that the tutorial that Justin Donaldson and I proposed was accepted.  Our tutorial is called Using Visualizations for Music Discovery.  Here’s the abstract:

As the world of online music grows, tools for helping people find new and interesting music in these extremely large collections become increasingly important.  In this tutorial we look at one such tool that can be used to help people explore large music collections: data visualization.  We survey the state-of-the-art in visualization for music discovery in commercial and research systems. Using numerous examples, we explore different algorithms and techniques that can be used to visualize large and complex music spaces, focusing on the advantages and the disadvantages of the various techniques.   We investigate user factors that affect the usefulness of a visualization and we suggest possible areas of exploration for future research.

I’m excited about this tutorial – mainly because I get to work with Justin on it. He’s a really smart guy who knows the state-of-the-art in visualizations. I’ll just be tagging along for the ride.

Detail from Genealogy of Pop/Rock Music

We are in the survey phase of talk preparation now. We’ve been gathering info on various types of visualizations and tagging them with the delicious tag MusicVizIsmir2009. Feel free to tag along (pun intended) with us and tag items that you encounter that you feel may be particularly interesting, unique or salient.



Making boring homework fun

JF's Pop Artist slide

One of the cool things about working at Sun Labs is all of the very smart interns who come and work for a summer. They are invariably creative, and bring many new ideas to the labs. One intern I worked with a few years back, Jean-Francois, just posted a blog entry about how he wanted to create a slide for a presentation that showed the pop artists from Sweden as a word cloud. Now most of us would have just typed in a few artist names and resized the fonts based on what we thought was the approximate popularity of each artist. But not Jean-Francois. He turned this slide into its own little research project – first trying to scour Wikipedia for popularity data, then mining Google search results, and finally settling on the last.fm web services to get listener data.
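Just to illustrate that last step, here’s a hedged sketch of turning listener counts into word-cloud font sizes. The artist names and listener counts below are invented for the example; JF pulled real numbers from the last.fm web services:

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Sketch: scale word-cloud font sizes by relative artist popularity.
    // Listener counts are made up; the real ones would come from last.fm.
    public class PopCloud {
        public static void main(String[] args) {
            Map<String, Long> listeners = new LinkedHashMap<>();
            listeners.put("ABBA", 1_200_000L);
            listeners.put("The Hives", 600_000L);
            listeners.put("Robyn", 450_000L);

            long max = listeners.values().stream().max(Long::compare).orElse(1L);
            int minPt = 12, maxPt = 72;
            for (Map.Entry<String, Long> e : listeners.entrySet()) {
                // Linear scaling between the smallest and largest font size.
                int pt = minPt + (int) Math.round((maxPt - minPt) * ((double) e.getValue() / max));
                System.out.printf("%-10s %dpt%n", e.getKey(), pt);
            }
        }
    }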

JF took a rather boring assignment – make an oral presentation on Sweden – and made it a learning experience for something that was interesting to him. I am not sure how much he learned about Sweden, but he certainly learned something about web mining, artist name resolution and ambiguity, and using web services. I wonder if Jean-Francois’s International Communications professor understood the depth of detail that JF went into to get the information on that one slide right.

Anyway, it is a cool post from JF.
