Archive for category research

Reidentification of artists and genres in the KDD cup data

Back in February I wrote a post about the KDD Cup (an annual Data Mining and Knowledge Discovery competition), asking whether this year’s cup was really about music recommendation, since all the data identifying the music had been anonymized.  The post received a number of really interesting comments about the nature of recommendation and whether context and content are really necessary for music recommendation, or whether user behavior is all you really need.  A few commenters suggested that it might be possible to de-anonymize the data using a constraint propagation technique.

Many voiced the opinion that such de-anonymizing of the data to expose user listening habits would indeed be unethical. Malcolm Slaney, the researcher at Yahoo! who prepared the dataset, offered the plea:

If you do de-anonymize the data please don’t tell anybody. We’ll NEVER be able to release data again.

As far as I know, no one has de-anonymized the KDD Cup dataset; however, researcher Matthew J. H. Rattigan of the University of Massachusetts Amherst has done the next best thing.  He has published a paper called Reidentification of artists and genres in the KDD cup, which shows that by analyzing the relational structures within the dataset it is possible to identify the artists, albums, tracks and genres used in the anonymized dataset.  Here’s an excerpt from the paper that gives an intuitive description of the approach:

For example, consider Artist 197656 from the Track 1 data. This artist has eight albums described by different combinations of ten genres. Each album is associated with several tracks, with track counts ranging from 1 to 69. We make the assumption that these albums and tracks were sampled without replacement from the discography of some real artist on the Yahoo! Music website. Furthermore, we assume that the connections between genres and albums are not sampled; that is, if an album in the KDD Cup dataset is attached to three genres, its real-world counterpart has exactly three genres (or “Categories”, as they are known on the Yahoo! Music site).

Under the above assumptions, we can compare the unlabeled KDD Cup artist with real-world Yahoo! Music artists in order to find a suitable match. The band Fischer Z, for example, is an unsuitable match, as their online discography only contains seven albums. An artist such as Meatloaf certainly has enough albums (56) to be a match, but none of those albums contain more than 31 tracks. The entry for Elvis Presley contains 109 albums, 17 of which boast 69 or more tracks; however, there is no consistent assignment of genres that satisfies our assumptions. The band Tool, however, is compatible with Artist 197656. The Tool discography contains 19 albums containing between 0 and 69 tracks. These albums are described by exactly 10 genres, which can be assigned to the unlabeled KDD Cup genres in a consistent manner. Furthermore, the match is unique: of the 134k artists in our labeled dataset, Tool is the only suitable match for Artist 197656.

Of course it is impossible for Matthew to evaluate his results directly, but he did create a number of synthetic, anonymized datasets drawn from Yahoo! and was able to demonstrate very high accuracy for the top artists and 62% accuracy overall.
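
To make the flavor of the matching concrete, here is a small toy sketch (my own illustration, not code from the paper) of the kind of consistency check the excerpt describes: an anonymized artist matches a real one only if each anonymized album can be paired with a distinct real album that has at least as many tracks and exactly the same number of genres, under a single genre mapping that holds across all the albums.

    from itertools import permutations

    # Toy consistency check for the matching idea described above -- my own
    # illustration, not Rattigan's code.  An album is a (track_count, genre_set)
    # pair; anonymized genres are integer IDs, real genres are names.

    def genre_extensions(genre_map, anon_genres, real_genres):
        """Yield every extension of genre_map (anon ID -> real name) under which
        anon_genres maps onto real_genres one-to-one."""
        anon = sorted(anon_genres)
        for perm in permutations(sorted(real_genres)):
            new_map = dict(genre_map)
            if all(new_map.setdefault(g, name) == name for g, name in zip(anon, perm)) \
                    and len(set(new_map.values())) == len(new_map):
                yield new_map

    def is_consistent_match(anon_albums, real_albums):
        """True if every anonymized album can be paired with a distinct real album
        that has at least as many tracks and the same number of genres, under one
        genre mapping that is consistent across all the albums."""
        def search(i, used, genre_map):
            if i == len(anon_albums):
                return True
            a_tracks, a_genres = anon_albums[i]
            for j, (r_tracks, r_genres) in enumerate(real_albums):
                if j in used or r_tracks < a_tracks or len(r_genres) != len(a_genres):
                    continue
                for new_map in genre_extensions(genre_map, a_genres, r_genres):
                    if search(i + 1, used | {j}, new_map):
                        return True
            return False
        return search(0, set(), {})

    # e.g. a two-album anonymized artist against a three-album real discography
    anon = [(12, {0, 1}), (69, {1, 2})]
    real = [(14, {'art rock', 'prog'}), (70, {'prog', 'metal'}), (8, {'live'})]
    print(is_consistent_match(anon, real))   # True

The paper, of course, has to run this kind of test against an entire 134k-artist catalog; the sketch just shows why album counts, track counts and genre counts alone can pin an artist down.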

The motivation for this type of work is not to turn the KDD Cup dataset into something that music recommendation researchers could use, but to get a better understanding of data privacy issues.  By understanding how large datasets can be de-anonymized, it will be easier for researchers in the future to create datasets that won’t easily yield their hidden secrets.  The paper is an interesting read – so since you are done with all of your reviews for RecSys and ISMIR, go ahead and give it a read:  https://www.cs.umass.edu/publication/docs/2011/UM-CS-2011-021.pdf.  Thanks to @ocelma for the tip.



catfish smooth

Kurt Jacobson is a recent addition to the staff here at The Echo Nest. Kurt has built a music exploration site called catfish smooth that allows you to explore the connections between artists.  Kurt describes it as: all about connections between music artists. In a sense, it is a music artist recommendation system but more. For each artist, you will see the type of “similar artist” recommendations to which you are accustomed – we use last.fm and The Echo Nest to get these. But you will also see some other inter-artist connections catfish has discovered from the web of linked data. These include things like “artists that are also English Male Singers” or “artists that are also Converts To Islam” or “artists that are also People From St. Louis, Missouri”. And, hopefully, you’ll get some media for each artist so you can have a listen.

It’s a really interesting way to explore the music space, allowing you to stumble upon new artists based on a wide range of parameters.

For example, take a look at the many categories and connections catfish smooth exposes for James Brown.

Kurt is currently conducting a usability survey for catfish smooth, so take a minute to kick the tires, and then help Kurt finish his PhD by taking the survey.


SongCards – an untapped app

I saw this interesting video from IDEO for c60 – an RFID-based interface that ‘reintroduces physicality to music, something lost with digitization and the move to the cloud.’

This video got me excited, because it is the hardware piece of an idea called ‘SongCards’ that my friend Steve Green and I had while working at Sun Labs a few years ago.  We pitched SongCards to Sun’s management (Sun was big into RFID at the time so it seemed like a good fit), but Sun didn’t bite – they decided to go buy MySQL instead.  And so this concept has been gathering digital dust in a text file on my laptop.  The c60 video has inspired me to dust it off and post it here.  I think there are some good ideas embedded in the concept. Perhaps the folks at IDEO will incorporate some into the c60, or maybe Eliot will add this idea to his Untapped Apps portfolio on Evolver.fm.

Here’s the concept in all its glory:

Observations

  • Many people have a physical connection with their music.  These people like to organize, display and interact with their music via the containers (album covers, cd cases).
  • Music is a highly social medium.  People enjoy sharing music with others.  People learn about new music from others in their social circle.
  • The location where music is stored will likely switch from devices managed by the listener to devices managed by a music service.  In the future, a music purchaser will purchase the right to listen to a particular song, while the actual music data will remain managed by the music service.
  • Digital music lacks much of the interesting metadata that previous generations of music listeners enjoyed – lyrics, photos of the performers, song credits.  The experience of reading the liner notes while listening to a new album has been lost in this new generation of digital music.
  • Music is collectable. People take pride in amassing large collections of music and like to be able to exhibit their collection to others.

The Problem

The digital music revolution and the inevitable move of our music from our CD racks, iPods and computers to the back room at Yahoo, Apple, or Google will make it convenient for people to listen to music in all sorts of new ways, but at the same time it will eliminate many of the interactions people have had with their music.  People can’t interact with the albums, read the liner notes, or display their collection. They can’t trade songs with their friends. There is no way to show off a music collection beyond saying “I have 2,025 songs on my iPod”. Album art is a dying art.

Music collecting is not just about the music; it is also about the things that surround the music. Digital music has stripped away all of the packaging, and at the same time has stripped away a big part of the music collecting experience.  We want to change that.

The Idea

Imagine if you could buy music like you buy collectable trading cards such as Magic: The Gathering or Pokemon cards.  One could buy a pack of cards at the local 7-11 for a few dollars.  The cards could be organized by genre: you could buy a pack of ‘boy-band’, ‘alternative-grunge’, ‘brit-pop’, ‘British invasion’, ‘drum and bass’ and so on.  Each pack would contain 5 or 10 cards drawn from the genre of the pack. Each card would have all of the traditional liner note metadata: lyrics, album art, artist bios.  Also associated with each card would be a unique ID, readable by an electronic reader, that identifies the unique instance of the card and the song/performance that the card represents.  The new owner of the card would add the song to their personal collection simply by presenting the card to any of their music players (presumably connected to the music server run by the music service provider). Once this is done, the user can access and play the song at any time on any of their music devices.

A pack of music cards could be packaged in the same way as other trading cards. Typically each pack contains one or two ‘special’ cards that are highly desirable; for music cards these would be the ‘hit cards’.  The bulk of the cards in a pack could be made up of lesser known, or less popular, bands.  For instance a ‘British invasion’ pack might contain ‘Hey Jude’ as the special card, a few lesser known Beatles songs, a few songs by The Who, and perhaps some by The Monkees and other bands of that era.  This method of packaging music would allow for serendipitous discovery, since you would never know what songs you would get in the pack.  It would also encourage music trading, since you could trade your duplicate songs with other music collectors.

Trading – since a card represents a song by a digital ID, trading a song is as simple as handing the card to someone. As soon as the new owner puts the card into one of their music players the transfer of ownership occurs: the song is added to the collection of the new owner and removed from the collection of the old owner.  There would be no limit to how often a song could be traded.
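
Since the core of the idea is just a mapping from card IDs to songs and owners on the service side, here is a toy sketch of what the scan-and-transfer logic might look like. Everything in it (class and method names) is hypothetical – the concept never got past a text file on my laptop.

    # Hypothetical sketch of the scan-and-transfer logic -- none of this ever shipped.

    class MusicService:
        def __init__(self):
            self.card_song = {}    # card_id -> song_id encoded on the card
            self.card_owner = {}   # card_id -> account_id (None until first scan)
            self.collections = {}  # account_id -> set of song_ids

        def register_card(self, card_id, song_id):
            """Done at manufacture time: bind a physical card to one song."""
            self.card_song[card_id] = song_id
            self.card_owner[card_id] = None

        def scan(self, card_id, account_id):
            """Called when a player reads the card's ID.  The song moves from the
            previous owner's collection (if any) into the scanner's collection."""
            song_id = self.card_song[card_id]
            previous = self.card_owner[card_id]
            if previous is not None and previous != account_id:
                self.collections[previous].discard(song_id)    # old owner loses it
            self.collections.setdefault(account_id, set()).add(song_id)
            self.card_owner[card_id] = account_id
            return song_id

    # A trade is just a second scan by the new owner:
    svc = MusicService()
    svc.register_card('card-001', 'Hey Jude')
    svc.scan('card-001', 'alice')   # Alice adds the song to her collection
    svc.scan('card-001', 'bob')     # Alice hands Bob the card; ownership moves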

Some interesting properties of music cards:

  • Your music collection once again has a physical presence. You can touch it, you can browse through it, you can stack it, you can show it off.
  • You can easily and legally trade or sell music with your friends (or on eBay for that matter).  Supply and demand economics can take hold in the music card aftermarket (just as we’ve seen with Beanie Babies and Magic cards).
  • Cards can be grouped in packages for sale using a number of criteria such as genre, popularity, geography, or appeal to a certain demographic.
  • You can make playlists by ordering your cards.
  • You can make a random playlist by shuffling your cards.
  • At a social gathering, cards from many people can be combined into a single uber-playlist.
  • You will be potentially exposed to new music every time you buy a new pack of cards.
  • You will not need to carry your cards with you when you want to listen to music (the music service knows what music you own).
  • Since the music service ‘knows’ what music you own it can monitor trades and music popularity to track trend setters within a social group and target appropriate marketing at the trend setters.
  • Song cards can’t be ‘ripped’ in the traditional sense, giving music companies much more control over their intellectual property.

Some interesting variations:

  • The artwork on the back of a card could be one section of the album art for a whole album.  You could tack the cards up on the wall to form the album art when you have the whole album.
  • Some of the cards could be power cards that act as modifiers:
    • ‘More Like This’ – when inserted into a playlist, plays a song similar to the previously played card. The similar song is drawn from the entire music service collection, not just the songs owned by the collector.
    • ‘Genre Wild Card’ – plays a random song from the genre.  The random song is drawn from the entire music service collection, not just the songs owned by the collector.
    • ‘Musical Journey’ – makes a musical journey between the surrounding cards.  The songs on the journey are drawn from the entire music service collection, not just the songs owned by the collector.
    • ‘Album Card’ – it’s not just a song, it’s a whole album.
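
To show how little machinery the power cards need, here is a toy sketch (again mine, and entirely hypothetical) of a player expanding an ordered stack of cards into a playlist. FakeService stands in for whatever similarity and catalog lookups the real music service would expose; the ‘Musical Journey’ card is left out because it needs look-ahead to the next card.

    import random

    # Hypothetical sketch of how a player might interpret power cards in a stack.

    class FakeService:
        def similar_song(self, song_id):
            return song_id + ' (something similar)'
        def songs_in_genre(self, genre):
            return ['%s track %d' % (genre, i) for i in range(1, 4)]
        def album_tracks(self, album_id):
            return ['%s track %d' % (album_id, i) for i in range(1, 3)]

    def expand_cards(cards, service):
        """cards: ('song', song_id), ('power', 'more_like_this'),
        ('power', 'genre_wildcard', genre) or ('power', 'album', album_id)."""
        playlist = []
        for card in cards:
            if card[0] == 'song':
                playlist.append(card[1])
            elif card[1] == 'more_like_this' and playlist:
                # drawn from the whole catalog, not just the collector's songs
                playlist.append(service.similar_song(playlist[-1]))
            elif card[1] == 'genre_wildcard':
                playlist.append(random.choice(service.songs_in_genre(card[2])))
            elif card[1] == 'album':
                playlist.extend(service.album_tracks(card[2]))
        return playlist

    print(expand_cards([('song', 'Hey Jude'),
                        ('power', 'more_like_this'),
                        ('power', 'genre_wildcard', 'british invasion')],
                       FakeService()))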

Note that I don’t think that SongCards would replace all digital music sales. It would still be possible to purchase and download a song from iTunes as one can do today.  I think that SongCards would appeal to the ‘Music Collector’, while the traditional download would appeal to the ‘Music Listener’.

That’s it – SongCards – just imagine what the world would be like if Sun had invested $800 million in SongCards instead of that open source database.



LastFM-ArtistTags2007

A few years back I created a data set of social tags from Last.fm. RJ at Last.fm graciously gave permission for me to distribute the dataset for research use.  I hosted the dataset on the media server at Sun Labs. However, with the Oracle acquisition, the media server is no longer serving up the data, so I thought I would post the data elsewhere.

The dataset is now available for download here: Lastfm-ArtistTags2007

Here are the details as told in the README file:

The LastFM-ArtistTags2007 Data set
Version 1.0
June 2008

What is this?

    This is a set of artist tag data collected from Last.fm using
    the Audioscrobbler webservice during the spring of 2007.

    The data consists of the raw tag counts for the 100 most
    frequently occurring tags that Last.fm listeners have applied
    to over 20,000 artists.

    An undocumented (and deprecated) option of the audioscrobbler
    web service was used to bypass the Last.fm normalization of tag
    counts.  This data set provides raw tag counts.

Data Format:

  The data is formatted one entry per line as follows:

  musicbrainz-artist-id<sep>artist-name<sep>tag-name<sep>raw-tag-count

Example:

    11eabe0c-2638-4808-92f9-1dbd9c453429<sep>Deerhoof<sep>american<sep>14
    11eabe0c-2638-4808-92f9-1dbd9c453429<sep>Deerhoof<sep>animals<sep>5
    11eabe0c-2638-4808-92f9-1dbd9c453429<sep>Deerhoof<sep>art punk<sep>21
    11eabe0c-2638-4808-92f9-1dbd9c453429<sep>Deerhoof<sep>art rock<sep>18
    11eabe0c-2638-4808-92f9-1dbd9c453429<sep>Deerhoof<sep>atmospheric<sep>4
    11eabe0c-2638-4808-92f9-1dbd9c453429<sep>Deerhoof<sep>avantgarde<sep>3

Data Statistics:

    Total Lines:      952810
    Unique Artists:    20907
    Unique Tags:      100784
    Total Tags:      7178442

Filtering:

    Some minor filtering has been applied to the tag data.  Last.fm will
    report tags with counts of zero or less on occasion. These tags have
    been removed.

    Artists with no tags have not been included in this data set.
    Of the nearly quarter-million artists that were inspected, 20,907
    artists had one or more tags.

Files:

    ArtistTags.dat  - the tag data
    README.txt      - this file
    artists.txt     - artists ordered by tag count
    tags.txt        - tags ordered by tag count

License:

    The data in LastFM-ArtistTags2007 is distributed with permission of
    Last.fm.  The data is made available for non-commercial use only under
    the Creative Commons Attribution-NonCommercial-ShareAlike UK License.
    Those interested in using the data or web services in a commercial
    context should contact partners at last dot fm. For more information
    see http://www.audioscrobbler.net/data/

Acknowledgements:

    Thanks to Last.fm for providing access to this tag data via their
    web services.

Contact:

    This data was collected and filtered by Paul Lamere of The Echo Nest.
    Send questions or comments to Paul.Lamere@gmail.com
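
If you want to load the data, a minimal parsing sketch looks like this (assuming the file is UTF-8 and sits in the working directory as ArtistTags.dat; the literal token <sep> is the field separator):

    from collections import defaultdict

    # Minimal reader for ArtistTags.dat as described in the README above.
    def load_artist_tags(path='ArtistTags.dat'):
        """Return {artist_name: {tag_name: raw_count}}."""
        tags = defaultdict(dict)
        with open(path, encoding='utf-8') as f:   # assuming UTF-8
            for line in f:
                mbid, artist, tag, count = line.rstrip('\n').split('<sep>')
                tags[artist][tag] = int(count)
        return tags

    tags = load_artist_tags()
    print(sorted(tags['Deerhoof'].items(), key=lambda kv: -kv[1])[:5])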

 


Is that a million songs in your pocket, or are you just glad to see me?

Yesterday, Steve Jobs reminded us that it was less than 10 years ago when Apple announced the first iPod, which could put a thousand songs in your pocket.  With the emergence of cloud-based music services like Spotify and Rhapsody, we can now have a virtually endless supply of music in our pocket.  The ‘bottomless iPod’ will have as big an effect on how we listen to music as the original iPod had back in 2001.  But with millions of songs to choose from, we will need help finding music that we want to hear.  Shuffle play won’t work when we have a million songs to choose from.  We will need new tools that help us manage our listening experience.  I’m convinced that one of these tools will be intelligent automatic playlisting.

This weekend at the Music Hack Day London, The Echo Nest is releasing the first version of our new Playlisting API.  The Playlisting API  lets developers construct playlists based on a flexible set of artist/song selection and sorting rules.  The Echo Nest has deep data about millions of artists and songs.  We know how popular Lady Gaga is, we know the tempo of every one of her songs,  we know other artists that sound similar to her, we know where she’s from, we know what words people use to describe her music (‘dance pop’, ‘club’, ‘party music’, ‘female’, ‘diva’ ).  With the Playlisting API we can use this data to select music and arrange it in all sorts of flexible ways – from very simple Pandora radio style playlists of similar sounding songs to elaborate playlists drawing on a wide range of parameters.  Here are some examples of the types of playlists you can construct with the API:

  • Similar artist radio – generate a playlist of songs by similar artists
  • Jogging playlist – generate a playlist of 80s power pop with a tempo between 120 and 130 BPM, but never ever play Bon Jovi
  • London Music Hack Day Playlist – generate a playlist of electronic and techno music by unknown artists near London, and order the tracks by tempo from slow to fast
  • Tomorrow’s top 40 – play  the hottest songs by  pop artists with low familiarity that are starting to get hottt
  • Heavy Metal Radio – A DMCA-Compliant radio stream of nothing but heavy metal

We also provide a dynamic playlisting API that allows for the creation of playlists that adapt based upon the skipping and rating behavior of the listener.
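
For the curious, here is a rough sketch of what a call for the jogging playlist above might look like from Python. The endpoint, parameter and field names are my assumptions for illustration (modeled on the playlist/static call in the v4 API), and the ‘never play Bon Jovi’ rule is left out – check the API documentation for the real signatures and the full set of selection and sorting rules.

    import requests

    # Hedged sketch of a 'jogging playlist' request: 80s power pop, 120-130 BPM.
    # Endpoint, parameter and field names here are assumptions -- see the docs.
    params = {
        'api_key': 'YOUR_ECHO_NEST_API_KEY',   # placeholder
        'type': 'artist-description',          # seed the playlist by descriptive terms
        'description': ['80s', 'power pop'],
        'min_tempo': 120,                      # BPM range suitable for jogging
        'max_tempo': 130,
        'results': 25,
        'format': 'json',
    }
    resp = requests.get('http://developer.echonest.com/api/v4/playlist/static',
                        params=params)
    for song in resp.json()['response']['songs']:
        print(song['artist_name'], '-', song['title'])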

I’m about to jump on a plane for Music Hack Day London, where we will be demonstrating this new API and some cool apps that have already been built upon it.  I’m hoping to see a few apps emerge from this Music Hack Day that use the new API.  More info about the API and how you can use it to do all sorts of fun things will be forthcoming.  For the motivated, dive into the API right now.


Upbeat and Quirky, With a Bit of a Build: Interpretive Repertoires in Creative Music Search

Upbeat and Quirky, With a Bit of a Build: Interpretive Repertoires in Creative Music Search
Charlie Inskip, Andy MacFarlane and Pauline Rafferty

ABSTRACT – Pre-existing commercial music is widely used to accompany moving images in films, TV commercials and computer games. This process is known as music synchronisation. Professionals are employed by rights holders and film makers to perform creative music searches on large catalogues to find appropriate pieces of music for synchronisation. This paper discusses a Discourse Analysis of thirty interview texts related to the process. Coded examples are presented and discussed. Four interpretive repertoires are identified: the Musical Repertoire, the Soundtrack Repertoire, the Business Repertoire and the Cultural Repertoire. These ways of talking about music are adopted by all of the community regardless of their interest as Music Owner or Music User.

Music is shown to have multi-variate and sometimes conflicting meanings within this community which are dynamic and negotiated. This is related to a theoretical feedback model of communication and meaning making which proposes that Owners and Users employ their own and shared ways of talking and thinking about music and its context to determine musical meaning. The value to the music information retrieval community is to inform system design from a user information needs perspective.


What Makes Beat Tracking Difficult? A Case Study on Chopin Mazurkas

What Makes Beat Tracking Difficult? A Case Study on Chopin Mazurkas
Peter Grosche, Meinard Müller and Craig Stuart Sapp

ABSTRACT – The automated extraction of tempo and beat information from music recordings is a challenging task. Especially in the case of expressive performances, current beat tracking approaches still have significant problems accurately capturing local tempo deviations and beat positions. In this paper, we introduce a novel evaluation framework for detecting critical passages in a piece of music that are prone to tracking errors. Our idea is to look for consistencies in the beat tracking results over multiple performances of the same underlying piece. As another contribution, we further classify the critical passages by specifying musical properties of certain beats that frequently evoke tracking errors. Finally, considering three conceptually different beat tracking procedures, we conduct a case study on the basis of a challenging test set that consists of a variety of piano performances of Chopin Mazurkas. Our experimental results not only make the limitations of state-of-the-art beat trackers explicit but also deepen the understanding of the underlying music material.
