Removing accents in artist names

If you write software for music applications, then you understand the difficulties of matching artist names. There are lots of issues: spelling errors, stop words (‘the beatles’ vs. ‘beatles, the’ vs. ‘beatles’), punctuation (is it “Emerson, Lake and Palmer” or “Emerson, Lake & Palmer”?), and common aliases (ELP, GNR, CSNY, Zep), to name just a few. One common problem is dealing with international characters. Most Americans don’t know how to type accented characters on their keyboards, so when they are looking for Beyoncé they will type ‘beyonce’. If you want your application to find the proper artist for these queries, you are going to have to deal with the missing accents. One way to do this is to extend the artist name matching to include a check against a version of the artist name with all of the accents removed. However, this is not so easy to do. You could certainly build a mapping table of all the possible accented characters, but that approach is prone to failure: you may neglect some obscure character mapping (like that funny ř in Antonín Dvořák).

Luckily, in Java 1.6 there’s a pretty reliable way to do this. Java 1.6 added a Normalizer class to the java.text package. The Normalizer class allows you to apply Unicode normalization to strings. In particular, you can apply a Unicode decomposition that replaces any precomposed character with a base character plus its combining accents. Once you do this, it’s a simple string replace to get rid of the accents. Here’s a bit of code to remove accents:

    public static String removeAccents(String text) {
        // requires: import java.text.Normalizer;
        // NFD decomposes 'é' into 'e' plus a combining acute accent,
        // which the regex below then strips away
        return Normalizer.normalize(text, Normalizer.Form.NFD)
                .replaceAll("\\p{InCombiningDiacriticalMarks}+", "");
    }

This is nice, straightforward code, and it has no effect on strings that have no accents.
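
Here’s a quick sanity check of what it does (expected output shown in the comments):

    System.out.println(removeAccents("Beyoncé"));        // Beyonce
    System.out.println(removeAccents("Antonín Dvořák")); // Antonin Dvorak
    System.out.println(removeAccents("The Beatles"));    // The Beatles (unchanged)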

Of course ‘removeAccents’ doesn’t solve all of the problems. It certainly won’t help you deal with artist names like ‘KoЯn’, nor will it deal with the wide range of artist name misspellings. If you are trying to normalize artist names, you should read how Columbia researcher Dan Ellis has approached the problem. I suspect that someday (soon, I hope) there will be a magic music web service that will solve this problem once and for all, and you’ll never again have to scratch your head at why you are listening to a song by Peter Bjorn and John instead of a song by Björk.

The BPM Explorer

Last month I wrote about using the Echo Nest API to analyze tracks to generate plots that you can use to determine whether or not a machine is responsible for setting the beat of a song. I received many requests to analyze tracks by particular artists, far too many for me to handle without giving up my day job. To satisfy this pent-up demand for click track analysis I’ve written an application called the BPM Explorer that lets you create your own click plots. With this application you can analyze any song in your collection, view its click plot and listen to your music, synchronized with the plot.

Check out the application here:  The Echo Nest BPM Explorer.  It’s written in Processing and deployed with Java Webstart, so it (should) just work.

My primary motivation for writing this application was to check out the new Echo Nest Java Client to make sure that it was easy to use from Processing. One of my secret plans is to get people in the Processing community interested in using the Echo Nest API. The Processing community is filled with ultra-creative folks who have strong artistic, programming and data visualization skills. I’d love to see more song visualizations like this and this that are built using the Echo Nest APIs. Processing is really cool – I was able to write the BPM Explorer in just a few hours (it took me longer to remember how to sign jar files for Webstart than it did to write the core plotter). Processing strips away all of the boring parts of graphics programming (create a frame, lay it out with a gridbag, make it visible, validate, invalidate, repaint, paint, arghh!). In Processing, you just write a draw() method that will be called 30 times a second. I hope I get the chance to write more Processing programs.
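
To give a flavor of how little boilerplate Processing needs, here’s a complete (if pointless) sketch – just an illustration, not the BPM Explorer itself:

    // a complete Processing sketch: setup() runs once, then draw()
    // is called repeatedly at the requested frame rate
    void setup() {
      size(400, 200);
      frameRate(30);  // redraw 30 times a second
    }

    void draw() {
      background(0);                        // clear the frame
      float x = (millis() / 10.0) % width;  // sweep a dot across the window
      ellipse(x, height / 2, 20, 20);
    }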

Update: I’ve released the BPM Explorer code as open source, as part of the echo-nest-demos project hosted at google-code. You can also browse the source for the BPM Explorer.

New Java Client for the Echo Nest API

Today we are releasing a Java client library for the Echo Nest developer API. This library gives the Java programmer full access to the Echo Nest developer API. The API includes artist-level methods such as getting artist news, reviews, blogs, audio, video, links, familiarity, hotttnesss, similar artists, and so on. The API also includes access to the renowned track analysis API that will give you a detailed musical analysis of any music track. This analysis includes loudness, mode, key, tempo, time signature, detailed beat structure, harmonic content, and timbre information for a track.

To use the API you need to get an Echo Nest developer key (it’s free) from developer.echonest.com.   Here are some code samples:

    // a quick and dirty audio search engine
    ArtistAPI artistAPI = new ArtistAPI(MY_ECHO_NEST_API_KEY);

    List<Artist> artists = artistAPI.searchArtist("The Decemberists", false);
    for (Artist artist : artists) {
        DocumentList<Audio> audioList = artistAPI.getAudio(artist, 0, 15);
        for (Audio audio : audioList.getDocuments()) {
            System.out.println(audio.toString());
        }
    }

    // find similar artists for weezer
    ArtistAPI artistAPI = new ArtistAPI(MY_ECHO_NEST_API_KEY);

    List<Artist> artists = artistAPI.searchArtist("weezer", false);
    for (Artist artist : artists) {
        List<Scored<Artist>> similars = artistAPI.getSimilarArtists(artist, 0, 10);
        for (Scored<Artist> simArtist : similars) {
            System.out.println("   " + simArtist.getItem().getName());
        }
    }

    // find the tempo of a track
    TrackAPI trackAPI = new TrackAPI(MY_ECHO_NEST_API_KEY);

    String id = trackAPI.uploadTrack(new File("/path/to/music/track.mp3"), false);
    AnalysisStatus status = trackAPI.waitForAnalysis(id, 60000);
    if (status == AnalysisStatus.COMPLETE) {
        System.out.println("Tempo in BPM: " + trackAPI.getTempo(id));
    }

There are some nifty bits in the API. The API will cache data for you, so frequently requested data (everyone wants the latest news about Cher) will be served up very quickly. The cache can be persisted, and the shelf life for data in the cache can be set programmatically (the default age is one week). The API will (optionally) schedule requests to ensure that you don’t exceed your call limit. For those who like to look under the hood, you can turn on tracing to see what the method URL calls look like and what the returned XML looks like.

If you are interested in kicking the tires of the Echo Nest API and you are a Java or Processing programmer, give the API a try.

If you have any questions, comments, or problems about the API, post to the Echo Nest Forums.

Track upload sample code

One of the biggest pain points for users of the Echo Nest developer API is the track upload method. This method lets you upload a track for analysis (which can be subsequently retrieved by a number of other API method calls such as get_beats, get_key, get_loudness and so on). Track upload, unlike all of the other Echo Nest methods, requires you to construct a multipart/form-data POST request. Since I get a lot of questions about track upload, I decided that I needed to code my own to get a full understanding of how to do it – so that (1) I could answer detailed questions about the process and (2) I could point to my code as an example of how to do it. I could have used a library (such as the Jakarta HTTP client library) to do the heavy lifting, but then I wouldn’t have learned a thing, nor would I have some code to point people at. So I wrote some Java code (part of the forthcoming Java client for the Echo Nest web services) that will do the upload.

You can take a look at this post method in its google-code repository. The tricky bit about a multipart/form-data POST is getting the multi-part form boundaries just right. There’s a little dance one has to do with the proper carriage returns and linefeeds, double-dash prefixes, double-dash suffixes and random boundary strings. Debugging can be a pain in the neck too, because if you get it wrong, typically the only diagnostic you get is a ‘500’ error, which just means something bad happened.
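
For the curious, the overall shape of such a request looks roughly like this. It’s a stripped-down sketch using plain HttpURLConnection; the field names (‘api_key’, ‘file’) are illustrative – see the code in the repository for the exact parameters the API expects:

    import java.io.*;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class MultipartSketch {
        public static void upload(URL url, String apiKey, File track) throws IOException {
            String boundary = "---------------------------" + System.currentTimeMillis();
            String CRLF = "\r\n";

            HttpURLConnection con = (HttpURLConnection) url.openConnection();
            con.setDoOutput(true);
            con.setRequestMethod("POST");
            con.setRequestProperty("Content-Type",
                    "multipart/form-data; boundary=" + boundary);

            DataOutputStream out = new DataOutputStream(con.getOutputStream());

            // a simple text field: note the double-dash *prefix* on the boundary
            out.writeBytes("--" + boundary + CRLF);
            out.writeBytes("Content-Disposition: form-data; name=\"api_key\"" + CRLF + CRLF);
            out.writeBytes(apiKey + CRLF);

            // the file field
            out.writeBytes("--" + boundary + CRLF);
            out.writeBytes("Content-Disposition: form-data; name=\"file\"; filename=\""
                    + track.getName() + "\"" + CRLF);
            out.writeBytes("Content-Type: application/octet-stream" + CRLF + CRLF);
            copy(new FileInputStream(track), out);
            out.writeBytes(CRLF);

            // the closing boundary gets a double-dash *suffix* as well
            out.writeBytes("--" + boundary + "--" + CRLF);
            out.close();

            System.out.println("Response code: " + con.getResponseCode());
        }

        private static void copy(InputStream in, OutputStream out) throws IOException {
            byte[] buf = new byte[4096];
            for (int n; (n = in.read(buf)) > 0; ) {
                out.write(buf, 0, n);
            }
            in.close();
        }
    }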

Track upload can also be a pain in the neck because you need to wait 10 or 20 seconds for the track upload to finish and for the track analysis to complete. This time can be quite problematic if you have thousands of tracks to analyze: 20 seconds times one thousand tracks is more than five hours. No one wants to wait that long to analyze a music collection. However, it is possible to short-circuit this analysis. You can skip the upload entirely if we have already performed an analysis on your track of interest. To see if an analysis of a track is already available, you can perform a query such as ‘get_duration’ using the MD5 hash of the audio file as the ID. If you get a result back, then we’ve already done the analysis, and you can skip the upload and just use the MD5 hash of your track as the ID for all of your queries. With all of the apps out there using the track analysis API (for instance, in just a week, donkdj has already analyzed over 30K tracks), our database of pre-cooked analyses is getting quite large. Soon, I suspect, you won’t need to upload most tracks (certainly not mainstream tracks); we will already have the data.
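
Computing the MD5 hash in Java is straightforward. Here’s a minimal sketch; you’d then pass the resulting hash as the ID to a cheap call like get_duration to see if the analysis already exists:

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.security.MessageDigest;
    import java.security.NoSuchAlgorithmException;

    // compute the MD5 hash of an audio file, suitable for checking
    // whether an analysis already exists before bothering with an upload
    public static String md5OfFile(String path)
            throws IOException, NoSuchAlgorithmException {
        MessageDigest md = MessageDigest.getInstance("MD5");
        InputStream in = new FileInputStream(path);
        byte[] buf = new byte[8192];
        for (int n; (n = in.read(buf)) > 0; ) {
            md.update(buf, 0, n);
        }
        in.close();

        StringBuilder hex = new StringBuilder();
        for (byte b : md.digest()) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }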

Music discovery is a conversation, not a dictatorship

Two big problems with music recommenders: (1) they can’t tell me why they recommended something beyond the trivial “People who liked X also liked Y”, and (2) if I want to interact with a recommender, I’m all thumbs – I can usually give a recommendation a thumbs up or thumbs down, but there is no way to steer the recommender (“no more emo please!”). These problems are addressed in the Music Explaura – a web-based music exploration tool just released by Sun Labs.

The Music Explaura lets you explore the world of music artists. It gives you all the context you need – audio, artist bio, videos, photos, discographies – to help you decide whether or not a particular artist is interesting. The Explaura also gives you similar-artist style recommendations: for any artist, you are given a list of similar artists to explore. The neat thing is, for any recommended artist, you can ask why it was recommended, and the Explaura will give you an explanation (in the form of an overlapping tag cloud).

The really cool bit (and this is knock-your-socks-off cool) is that you can use an artist’s tag cloud to steer the recommender. If you like Jimi Hendrix, but want to find artists that are similar but less psychedelic and more blues oriented, you can just grab the ‘psychedelic’ tag with your mouse and shrink it, and grab the ‘blues’ tag and make it bigger – you’ll instantly get an updated set of artists that are more like Cream and less like The Doors.
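
Conceptually (and this is my own toy illustration, not the Explaura’s actual code), you can think of each artist as a weighted tag vector; steering just re-weights the query vector before re-ranking the candidates by similarity:

    import java.util.HashMap;
    import java.util.Map;

    // toy illustration of tag-cloud steering: shrink 'psychedelic',
    // grow 'blues', then re-rank candidates by cosine similarity
    public class SteeringSketch {
        static double cosine(Map<String, Double> a, Map<String, Double> b) {
            double dot = 0, na = 0, nb = 0;
            for (Map.Entry<String, Double> e : a.entrySet()) {
                Double w = b.get(e.getKey());
                if (w != null) dot += e.getValue() * w;
                na += e.getValue() * e.getValue();
            }
            for (double w : b.values()) nb += w * w;
            return dot / (Math.sqrt(na) * Math.sqrt(nb));
        }

        public static void main(String[] args) {
            Map<String, Double> query = new HashMap<String, Double>();
            query.put("psychedelic", 1.0);   // a Hendrix-like starting cloud
            query.put("blues", 0.5);
            query.put("guitar", 0.9);

            // the user shrinks 'psychedelic' and grows 'blues'...
            query.put("psychedelic", 0.2);
            query.put("blues", 1.0);

            // ...and every candidate artist would be re-scored against the
            // steered cloud with cosine(query, candidateTagCloud)
        }
    }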

I strongly suggest that you go and play with the Explaura – it lets you take an active role in music exploration. What’s a band that is an emo version of Led Zeppelin? Blue Öyster Cult, of course! What’s an artist like Britney Spears but with an indie vibe? Katy Perry! How about a band like The Beatles but recording in this decade? Try The Coral. A band like Metallica but with a female vocalist? Try Kittie. Was there anything like emo in the 60s? Try Leonard Cohen. The interactive nature of the Explaura makes it quite addicting. I can get lost for hours exploring some previously unknown corner of the music world.

Steve (the search guy) has a great post describing the Music Explaura in detail. One thing he doesn’t describe is the backend system architecture. Behind the Music Explaura is a distributed data store with item search and similarity capabilities built into the core. This makes it possible to scale the system up to millions of items, serving requests from thousands of simultaneous users. It really is a nice system. (Full disclosure: I spent the last several years working on this project, so naturally I think it is pretty cool.)

The Music Explaura gives us a hint of what music discovery will be like in the future. Instead of a world where a music vendor gives you a static list of recommended artists, we’ll live in a world where the recommender can tell you why it is recommending an item, and you can respond by steering the recommendations away from the things you don’t like and toward the things you do. Music discovery will no longer be a dictatorship; it will be a two-way conversation.

Magnatagatune – a new research data set for MIR

Edith Law (of TagATune fame) and Olivier Gillet have put together one of the most complete MIR research datasets since uspop2002. The dataset (with the best name ever) is called magnatagatune. It contains:

  • Human annotations collected by Edith Law’s TagATune game.
  • The corresponding sound clips from magnatune.com, encoded as 16 kHz, 32 kbps mono MP3 (generously contributed by John Buckman, the founder of every MIR researcher’s favorite label, Magnatune)
  • A detailed analysis from The Echo Nest of the track’s structure and musical content, including rhythm, pitch and timbre.
  • All the source code for generating the dataset distribution

Some detailed stats of the data calculated by Olivier are:

  • clips: 25863
  • source mp3: 5405
  • albums: 446
  • artists: 230
  • unique tags: 188
  • similarity triples: 533
  • votes for the similarity judgments: 7650

This dataset is one-stop shopping for all sorts of MIR-related tasks, including:

  • Artist Identification
  • Genre classification
  • Mood Classification
  • Instrument identification
  • Music Similarity
  • Autotagging
  • Automatic playlist generation

As part of the dataset, The Echo Nest is providing a detailed analysis of each of the 25,000+ clips. This analysis includes a description of all musical events, structures and global attributes, such as key, loudness, time signature, tempo, beats, sections, and harmony. This is the same information that is provided by our track-level API, described at developer.echonest.com.

Note that Olivier and Edith mention me by name in their release announcement, but really I was just the go-between. Tristan (one of the co-founders of The Echo Nest) did the analysis, and The Echo Nest compute infrastructure got it done fast (our analysis of the 25,000 tracks took much less time than it did to download the audio).

I expect to see this dataset become one of the oft-cited datasets of MIR researchers.

Here’s the official announcement:

Edith Law, John Buckman, Paul Lamere and myself are proud to announce the release of the Magnatagatune dataset.

This dataset consists of ~25000 29s long music clips, each of them annotated with a combination of 188 tags. The annotations have been collected through Edith’s “TagATune” game (http://www.gwap.com/gwap/gamesPreview/tagatune/). The clips are excerpts of songs published by Magnatune.com – and John from Magnatune has approved the release of the audio clips for research purposes. For those of you who are not happy with the quality of the clips (mono, 16 kHz, 32kbps), we also provide scripts to fetch the mp3s and cut them to recreate the collection. Wait… there’s more! Paul Lamere from The Echo Nest has provided, for each of these songs, an “analysis” XML file containing timbre, rhythm and harmonic-content related features.

The dataset also contains a smaller set of annotations for music similarity: given a triple of songs (A, B, C), how many players have flagged the song A, B or C as most different from the others.

Everything is distributed freely under a Creative Commons Attribution – Noncommercial-Share Alike 3.0 license, and is available here: http://tagatune.org/Datasets.html

This dataset is ever-growing: as more users play TagATune, more annotations will be collected, and new snapshots of the data will be released in the future. A new version of TagATune will indeed be up by next Monday (April 6). To make this dataset grow even faster, please go to http://www.gwap.com/gwap/gamesPreview/tagatune/ next Monday and start playing.

Enjoy!
The Magnatagatune team

Killer music technology

We’ve been heads down here at the Echo Nest putting the finishing touches on what I think is a game changer for music discovery. For years, music recommendation companies have been trying to get collaborative filtering technologies to work. These CF systems work pretty well, but sooner or later you’ll get a bad recommendation; there are just too many ways for a CF recommender to fail. Here at the ‘nest we’ve decided to take a completely different approach. Instead of recommending music based on the wisdom of the crowds, or based upon what your friends are listening to, we are going to recommend music based simply on whether or not the music is good. This is such an obvious idea – recommend music that is good, and don’t recommend music that is bad – that it is a puzzle why this approach hasn’t been taken before. Of course, deciding which music is good and which music is bad can be problematic, but the scientists here at The Echo Nest have spent years building machine learning technologies so that we can essentially reproduce the thought process of a Pitchfork music critic. Think of this technology as Pitchfork-in-a-box.

Our implementation is quite simple. We’ve added a single API method, ‘get_goodness’, to our set of developer offerings. You give this method an Echo Nest artist ID (which you can obtain via an artist search call), and get_goodness returns a number between zero and one that indicates how good or bad the artist is. Here’s an example call for Radiohead:

http://developer.echonest.com/api/get_goodness?api_key=EHY4JJEGIOFA1RCJP&id=music://id.echonest.com/~/AR/ARH6W4X1187B99274F&version=3

The results are:

<response version="3">
    <status>
        <code>0</code>
        <message>Success</message>
    </status>
    <query>
        <parameter name="api_key">EHY4JJEGIOFA1RCJP</parameter>
        <parameter name="id">music://id.echonest.com/~/AR/ARH6W4X1187B99274F</parameter>
    </query>
    <artist>
        <name>Radiohead</name>
        <id>music://id.echonest.com/~/AR/ARH6W4X1187B99274F</id>
        <goodness>0.47</goodness>
        <instant_critic>More enjoyable than Kanye bitching.</instant_critic>
    </artist>
</response>

We also include in the response a text string that indicates how you should feel about this artist. This is just the tip of the iceberg for our forthcoming automatic music review technology, which will generate blog entries, Amazon reviews, Wikipedia descriptions and press releases automatically, based solely upon the audio.

We’ve made a web demo of this technology that will allow you to try out the goodness API. Check it out at: demo.echonest.com.

We’ve had lots of late nights in the last few weeks, but now that this baby is launched, it’s time to celebrate (briefly) and then move on to the next killer music tech!

Something is fishy with this recommender

In my rather long-winded post on the problems with current music recommenders, I pointed out the Harry Potter Effect. Collaborative filtering recommenders tend to recommend things that are popular, which makes those items even more popular, creating a feedback loop (or, as podcomplex calls it, a similarity vortex) that results in certain items becoming extremely popular at the expense of overall diversity. (For an interesting demonstration of this effect, see ‘Online monoculture and the end of the niche‘.)

Oscar sent me an example of this effect. At the popular British online music store HMV, a rather large fraction of artist recommendations point to the Kings of Leon.

Oscar points out that even for albums that haven’t been released yet, HMV will tell you that ‘customers who bought the new unreleased album by Depeche Mode also bought the Kings of Leon’. Of course it is no surprise that if you look at the HMV bestsellers, Kings of Leon are way up there at position #3.

At first blush, this does indeed look like a classic example of the Harry Potter Effect, but I’m a bit suspicious that what we are seeing is not a feedback loop but an example of shilling – using the recommender to explicitly promote a particular item. It may be that HMV has decided to hardwire a slot in their ‘customers who bought this also bought’ section to point to an item that they are trying to promote – perhaps due to a sweetheart deal with a music label. I don’t have any hard evidence of this, but when you look at the wide variety of artists that point to Kings of Leon – from Miley Cyrus to Led Zeppelin and Nirvana – it is hard to imagine that this is the result of natural collaborative filtering. Music promotion that disguises itself as music recommendation has been around for about as long as there have been people looking for new music; payola schemes have dogged radio for decades. It is not hard to believe that this type of dishonest marketing will find its way into recommender systems. We’ve already seen the rise of ‘search engine optimization’ companies that will get your web site onto the first page of Google search results; it won’t be long before we see a recommender engine optimization industry that will promote your items by manipulating recommenders. It may already be happening now, and we just don’t know about it. The next time you get a recommendation for Kings of Leon because you like The Rolling Stones, ask yourself: is this a real and honest recommendation, or are they just trying to sell you something?

Using Visualizations for Music Discovery

As Ben pointed out last week, the ISMIR  site has posted the tutorial schedule for ISMIR 2009.  I’m happy to see that the tutorial that Justin Donaldson and I proposed was accepted.  Our tutorial is called Using Visualizations for Music Discovery.  Here’s the abstract:

As the world of online music grows, tools for helping people find new and interesting music in these extremely large collections become increasingly important.  In this tutorial we look at one such tool that can be used to help people explore large music collections: data visualization.  We survey the state-of-the-art in visualization for music discovery in commercial and research systems. Using numerous examples, we explore different algorithms and techniques that can be used to visualize large and complex music spaces, focusing on the advantages and the disadvantages of the various techniques.   We investigate user factors that affect the usefulness of a visualization and we suggest possible areas of exploration for future research.

I’m excited about this tutorial – mainly because I get to work with Justin on it. He’s a really smart guy who really knows the state-of-the-art in visualizations. I’ll just be tagging along for the ride.

Detail from the Genealogy of Pop/Rock Music

We are in the survey phase of talk preparation now. We’ve been gathering info on various types of visualizations and tagging them with the delicious tag MusicVizIsmir2009. Feel free to tag along (pun intended) and tag items that you encounter that you feel may be particularly interesting, unique or salient.

Help! My iPod thinks I’m emo – Part 1

At SXSW 2009, Anthony Volodkin and I presented a panel on music recommendation called “Help! My iPod thinks I’m emo”. Anthony and I hold very different views on music recommendation. You can read Anthony’s notes for this session at his blog: Notes from the “Help! My iPod Thinks I’m Emo!” panel. This is Part 1 of my notes – and my viewpoints on music recommendation. (Note that even though I work for The Echo Nest, my views are not necessarily those of my employer.)

The SXSW audience is a technical audience to be sure, but they are not as immersed in recommender technology as regular readers of MusicMachinery, so this talk does not dive down into hard-core tech issues; instead it is a lofty overview of some of the problems and potential solutions for music recommendation. So let’s get to it.

Music Recommendation is Broken.

Even though Anthony and I disagree about a number of things, one thing that we do agree on is that music recommendation is broken in some rather fundamental ways. For example, this slide shows a recommendation from iTunes (from a few years back). iTunes suggests that if I like Britney Spears’ “Hit Me Baby One More Time” I might also like the “Report on Pre-War Intelligence for the Iraq War”. Clearly this is a broken recommendation – this is a recommendation no human would make. Now if you’ve spent any time visiting music sites on the web, you’ve likely seen recommendations just as bad as this. Sometimes music recommenders just get it wrong – and they get it wrong very badly. In this talk we are going to look at how music recommenders work, why they make such dumb mistakes, and some of the ideas coming from researchers and innovators like Anthony to fix music discovery.

Why do we even care about music recommendation and discovery?

The world of music has changed dramatically. When I was growing up, a typical music store had on the order of 1,000 unique artists to choose from. Now, online music stores like iTunes have millions of unique songs to choose from. Myspace has millions of artists, and the P2P networks have billions of tracks available for download. We are drowning in a sea of music. And this is just the beginning. In a few years’ time the transformation to digital, online music will be complete. All recorded music will be online – every recording of every performance of every artist, whether they are a mainstream act, a garage band, or just a kid with a laptop, will be uploaded to the web. There will be billions of tracks to choose from, with millions more arriving every week. With all this music to choose from, this should be a music nirvana – we should all be listening to new and interesting music.

With all this music, classic long tail economics apply. Without the constraints of physical space, music stores no longer need to focus on the most popular artists. There should be less of a focus on the hits and the megastars. With unlimited virtual space, we should see a flattening of the long tail – music consumption should shift to less popular artists. This is good for everyone. It is good for business – it is probably cheaper for a music store to sell a no-name artist than it is to sell the latest Miley Cyrus track. It is good for artists – there are millions of unknown artists that deserve a bit of attention. And it is good for the listener – listeners get to hear a larger variety of music that better fits their taste, as opposed to music designed and produced to appeal to the broadest demographics possible. So with the increase in available music we should see less emphasis on the hits. In the future, with all this music, our music listening should be less like Walmart and more like SXSW. But is this really happening? Let’s take a look.

The state of music discovery

If we look at some of the data from Nielsen SoundScan 2007, we see that although more than 4 million unique tracks were sold, only 1% of those tracks accounted for 80% of sales. What’s worse, a whopping 13% of all sales are from American Idol or Disney artists. Clearly we are still focusing on the hits. One must ask, what is going on here? Was Chris Anderson wrong? I really don’t think so. Anderson says that to make the long tail ‘work’ you have to do two things: (1) make everything available and (2) help me find it. We are certainly on the road to making everything available – soon all music will be online. But I think we are doing a bad job on step (2), help me find it. Our music recommenders are *not* helping us find music; in fact, current music recommenders do the exact opposite – they tend to push us toward popular artists and limit the diversity of recommendations. Music recommendation is fundamentally broken: instead of helping us find music in the long tail, recommenders push us back to popular content. To highlight this, take a look at the next slide.

Help! I’m stuck in the head

This is a study done by Dr. Oscar Celma of MTG UPF (and now at BMAT). Oscar was interested in how far into the long tail a recommender would get you. He divided the 245,000 most popular artists into three sections of equal sales: the short head, with 83 artists; the mid tail, with 6,659 artists; and the long tail, with 239,798 artists. He looked at recommendations (top 20 similar artists) that start in the short head and found that 48% of those recommendations bring you right back to the short head. So even though there are nearly a quarter million artists to choose from, 48% of all recommendations are drawn from a pool of the 83 most popular artists. The other 52% of recommendations are drawn from the mid tail. No recommendations at all bring you to the long tail: the nearly 240,000 artists in the long tail are not reachable directly from the short head. This demonstrates the problem with commercial recommendation – it focuses people on the popular at the expense of the new and unpopular.

Let’s take a look at why recommendation is broken.

The Wisdom of Crowds

First let’s take a look at how a typical music recommender works. Most music recommenders use a technique called Collaborative Filtering (CF). This is the type of recommendation you get at Amazon, where they tell you that ‘people who bought X also bought Y’. The core of a CF recommender is actually quite simple. At the heart of the recommender is typically an item-to-item similarity matrix that is used to show how similar or dissimilar items are. Here we see a tiny excerpt of such a matrix. I constructed this by looking at the listening patterns of 12,000 last.fm listeners and seeing which artists have overlapping listeners. For instance, 35% of listeners that listen to Britney Spears also listen to Evanescence, while 62% also listen to Christina Aguilera. The core of a CF recommender is a similarity matrix constructed from this listener overlap. If you like Britney Spears, from this matrix we could recommend that you might like Christina and Kelly Clarkson, and we’d predict that you probably wouldn’t like Metallica or Lacuna Coil.
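
In code, building a row of such a matrix from listening histories is just counting overlaps. Here’s a small sketch (my own illustration, not the code behind the slide), assuming you already have each listener’s set of artists:

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.Set;

    // overlap(x, y) = fraction of x's listeners who also listen to y
    public static Map<String, Double> overlapWith(String artist,
            List<Set<String>> userHistories) {
        int listeners = 0;
        Map<String, Integer> coCounts = new HashMap<String, Integer>();

        for (Set<String> history : userHistories) {
            if (!history.contains(artist)) continue;
            listeners++;
            for (String other : history) {
                if (other.equals(artist)) continue;
                Integer c = coCounts.get(other);
                coCounts.put(other, c == null ? 1 : c + 1);
            }
        }

        Map<String, Double> overlap = new HashMap<String, Double>();
        if (listeners == 0) return overlap;   // no crowd, no wisdom
        for (Map.Entry<String, Integer> e : coCounts.entrySet()) {
            overlap.put(e.getKey(), e.getValue() / (double) listeners);
        }
        return overlap;
    }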

CF recommenders have a number of advantages. First, they work really well for popular artists: when there are lots of people listening to a set of artists, the overlap is a good indicator of overall preference. Second, CF systems are fairly easy to implement. The math is pretty straightforward and conceptually they are very easy to understand. Of course, the devil is in the details – scaling a CF system to work with millions of artists and billions of tracks for millions of users is an engineering challenge. Still, it is no surprise that CF systems are so widely used: they give good recommendations for popular items and they are easy to understand and implement. However, there are some flaws in CF systems that ultimately make them unsuitable for long-tail music recommendation. Let’s take a look at some of the issues.

The Stupidity of Solitude

The DeBretts are a long tail artist. They are a punk band with a strong female vocalist, reminiscent of Blondie or Patti Smith. (Be sure to listen to their song ‘The Rage’.) The DeBretts haven’t made it big yet; at last.fm they have about 200 listeners. They are a really good band and deserve to be heard. But if you went to an online music store like iTunes that uses a collaborative filterer to recommend music, you would *never* get a recommendation for the DeBretts. The reason is pretty obvious. The DeBretts may appeal to listeners that like Blondie, but even if all of the DeBretts’ listeners listen to Blondie, the percentage of Blondie listeners that listen to the DeBretts is just too low. If Blondie has a million listeners, then the maximum potential overlap (200/1,000,000) is way too small to drive any recommendations from Blondie to the DeBretts. The bottom line is that if you like Blondie, even though the DeBretts may be a perfect recommendation for you, you will never get this recommendation. CF systems rely on the wisdom of the crowds, but for the DeBretts there is no crowd, and without the crowd there is no wisdom. Among those that build recommender systems, this issue is called the ‘cold start’ problem. It is one of the biggest problems for CF recommenders: a CF-based recommender cannot make good recommendations for new and unpopular items.

Clearly, this cold start problem is going to make it difficult for us to find new music in the long tail. It is one of the main reasons why our recommenders are still ‘stuck in the head’.

The Harry Potter Problem

This slide shows a recommendation: “If you enjoy Java RMI, you may enjoy Harry Potter and the Sorcerer’s Stone”. Why is Harry Potter being recommended to a reader of a highly technical programming book? Certain items, like the Harry Potter series of books, are very popular. This popularity can have an adverse effect on CF recommenders. Since popular items are purchased often, they are frequently purchased along with unrelated items. This can cause the recommender to associate the popular item with the unrelated item, as we see in this case. This effect is often called the Harry Potter effect: people who bought just about any book you can think of also bought a Harry Potter book.

Case in point is “The Big Penis Book” – Amazon tells us that after viewing “The Big Penis Book”, 8% of customers go on to buy The Tales of Beedle the Bard from the Harry Potter series. It may be true that people who like big penises also like Harry Potter, but it is probably not the best recommendation.

(BTW, I often use examples from Amazon to highlight issues with recommendation. This doesn’t mean that Amazon has a bad recommender – in fact I think they have one of the best recommenders in the world. Whenever I go to Amazon to buy one book, I end up buying five because of their recommender. The issues that I show are not unique to the Amazon recommender. You’ll find the same issues with any other CF-based recommender.)

Popularity Bias

One effect of this Harry Potter problem is that a recommender will associate the popular item with many other items. The result is that the popular item tends to get recommended quite often, and since it is recommended often, it is purchased often. This leads to a feedback loop: popular items get purchased often because they are recommended often, and are recommended often because they are purchased often. This ‘rich-get-richer’ feedback loop leads to a system where popular items become extremely popular at the expense of the unpopular, and the overall diversity of recommendations goes down. These feedback loops result in a recommender that pushes people toward more popular items and away from the long tail. This is exactly the opposite of what we are hoping our recommenders will do. Instead of helping us find new and interesting music in the long tail, recommenders are pushing us back to the same set of very popular artists.
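
To see how quickly such a loop concentrates attention, here’s a toy simulation (my own illustration): every purchase makes an item proportionally more likely to be recommended – and therefore purchased – again:

    import java.util.Arrays;
    import java.util.Random;

    // rich-get-richer: each round, an item is "recommended and purchased"
    // with probability proportional to its current purchase count
    public class FeedbackLoopSketch {
        public static void main(String[] args) {
            int[] purchases = new int[100];
            Arrays.fill(purchases, 1);        // everyone starts equal
            Random rng = new Random(42);

            for (int round = 0; round < 100000; round++) {
                int total = 0;
                for (int p : purchases) total += p;
                int pick = rng.nextInt(total);    // weighted random choice
                for (int i = 0; i < purchases.length; i++) {
                    pick -= purchases[i];
                    if (pick < 0) { purchases[i]++; break; }
                }
            }

            // a small random early lead snowballs into outsized popularity
            Arrays.sort(purchases);
            System.out.println("most purchased:   " + purchases[purchases.length - 1]);
            System.out.println("median purchased: " + purchases[purchases.length / 2]);
        }
    }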

Note that you don’t need a fancy recommender system to be susceptible to these feedback loops. Even simple charts, such as those at music sites like the Hype Machine, can lead to these loops: people listen to the tracks at the top of the charts, which keeps those songs popular, cementing their hold on the top spots.

The Novelty Problem

There is a difference between a recommender that is designed for music discovery and one that is designed for music shopping. Most recommenders are intended to help a store make more money by selling you more things. This tends to lead to recommendations such as this one from Amazon, which suggests that since I’m interested in Sgt. Pepper’s Lonely Hearts Club Band I might like Abbey Road and Please Please Me and every other Beatles album. Of course everyone in the world already knows about these items, so these recommendations are not going to help people find new music. But that’s not the point: Amazon wants to sell more albums, and recommending Beatles albums is a great way to do that.

One factor that is contributing to the novelty problem is high-stakes evaluations like the Netflix Prize. The Netflix Prize is a competition that offers a million dollars to anyone who can improve the Netflix movie recommender by 10%. The evaluation is based on how well a recommender can predict how a viewer will rate a movie on a 1-5 star scale. This type of evaluation focuses on relevance. A recommender that can correctly predict that I’ll rate the movie ‘Titanic’ 2.2 stars instead of 2.0 stars may score well in this type of evaluation, but that hasn’t really improved the quality of the recommendation: I won’t watch a 2.0 or a 2.2 star movie, so what does it matter? The downside of the Netflix Prize is that only one metric – relevance – is being used to drive the advancement of the recommender state of the art, when there are other equally important metrics, novelty being one of them.

The Napoleon Dynamite Problem

Some items are not so easy to categorize. For instance, if you look at the ratings for the movie Napoleon Dynamite, you see a bimodal distribution of 5-star and 1-star ratings. People either love it or hate it, and it is hard to predict how an individual will react.

The Opacity Problem

Here’s an Amazon recommendation that suggests that if I like Nine Inch Nails I might like Johnny Cash. Since NIN is an industrial band and Johnny Cash is a country/western singer, at first blush this seems like a bad recommendation, and if you didn’t know any better you might write this off as just another broken recommender. It would be really helpful if the CF recommender could explain why it is recommending Johnny Cash, but all it can really tell you is that ‘other people who listened to NIN also listened to Johnny Cash’, which isn’t very helpful. If the recommender could give you a better explanation of why it was recommending something – perhaps something like “Johnny Cash has an absolutely stunning cover of the NIN song ‘Hurt’ that will make you cry” – then you would have a much better understanding of the recommendation. The explanation would turn what seems like a very bad recommendation into a phenomenal one – one that perhaps introduces you to a whole new genre of music – a recommendation that may have you listening to ‘Folsom Prison’ in a few weeks.

Hacking the Recommender

Here’s a recommendation based on a book by Pat Robertson called Six Steps to Spiritual Revival (courtesy of Bamshad Mobasher). This is a book by the notorious televangelist that promises to reveal “God’s awesome power in your life.” Amazon offers a recommendation suggesting that ‘customers who shopped for this item also shopped for The Ultimate Guide to Anal Sex for Men’. Clearly this is not a good recommendation. It is the result of a loosely organized group who didn’t like Pat Robertson and managed to trick the Amazon recommender into recommending a rather inappropriate book, just by visiting the Amazon page for Robertson’s book and then visiting the Amazon page for the sex guide.

This manipulation of the Amazon recommender was easy to spot and can be classified as a prank, but it is not hard to imagine that an artist or a label might use similar techniques, in a more subtle fashion, to manipulate a recommender to promote their tracks (or to demote the competition). We already live in a world where search engine optimization is an industry. It won’t be long before recommender engine optimization is an equally profitable (and destructive) industry.

Wrapping up

This is the first part of a two-part post. In this post I’ve highlighted some of the issues in traditional music recommendation. The next post is all about how to fix these problems. For an alternative view, be sure to visit Anthony Volodkin’s blog, where he presents a rather different viewpoint on music recommendation.
