Spotifying over 200 Billboard charts
Posted by Paul in code, data, fun, web services on November 8, 2009
Yesterday, I Spotified the Billboard Hot 100 – making it easy to listen to the charts. This morning I went one step further and Spotified all of the Billboard Album and Singles charts.
The Spotified Billboard Charts
That’s 128 singles charts (which includes charts like Luxembourg Digital Songs, Hot Mainstream R&B/Hip-Hop Songs and Hot Ringtones) and 83 album charts, including charts like Top Bluegrass Albums, Top Cast Albums and Top R&B Catalog Albums.
In these 211 charts you’ll find 6,482 Spotify tracks, 2,354 of them unique (some tracks, like Miley Cyrus’s ‘The Climb’, appear on many charts).
Building the charts stretches the limits of the Billboard API (only 1,500 calls allowed per day!) as well as my patience (making about 10K calls to the Spotify API while trying not to exceed the rate limit means it takes a couple of hours to resolve all the tracks). Nevertheless, it was a fun little project, and it shows off the Spotify catalog quite well. For popular Western music they have really good coverage.
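The throttling itself is nothing fancy: just enforce a minimum gap between calls. Here’s a minimal sketch of the idea (the class and the rate are my own invention – JSpot doesn’t ship a throttler):

```java
/** Enforces a minimum gap between calls so a per-second rate limit is respected. */
public class Throttle {
    private final long minGapMillis;
    private long lastCall = 0;

    public Throttle(int maxPerSecond) {
        this.minGapMillis = 1000L / maxPerSecond;
    }

    /** Blocks until at least minGapMillis has passed since the previous call. */
    public synchronized void await() throws InterruptedException {
        long wait = lastCall + minGapMillis - System.currentTimeMillis();
        if (wait > 0) {
            Thread.sleep(wait);
        }
        lastCall = System.currentTimeMillis();
    }
}
```

At, say, 2 calls per second, resolving roughly 10K tracks works out to well over an hour of wall-clock time, which lines up with the “couple of hours” above.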
Request for the Billboard API: please increase the usage limit tenfold. 1,500 calls per day is really limiting, especially when trying to debug a client library.
Request for the Spotify API: please, please, please – make it possible to create and modify Spotify playlists via web services.
The Billboard Hot 100. In Spotify.
Posted by Paul in code, fun, web services on November 7, 2009
Inspired by Oscar’s ‘1001 Albums You Must Hear Before You Die… in Spotify’, I put together an app that gets the top charts from Billboard (using the nifty Billboard API) and resolves each track to a Spotify ID – giving you a Top 100 chart that you can play.
The Billboard Hot 100 in Spotify
Here’s the Top 10:
1. I Gotta Feeling by The Black Eyed Peas (weeks on chart: 16, peak: 1)
2. Down by Jay Sean featuring Lil Wayne (weeks on chart: 13, peak: 2)
3. Party In The U.S.A. by Miley Cyrus (weeks on chart: 7, peak: 2)
4. Run This Town by Jay-Z, Rihanna & Kanye West (weeks on chart: 9, peak: 2)
5. Whatcha Say by Jason DeRulo (weeks on chart: 7, peak: 5)
6. You Belong With Me by Taylor Swift (weeks on chart: 23, peak: 2)
7. Paparazzi by Lady Gaga (weeks on chart: 5, peak: 7)
8. Use Somebody by Kings Of Leon (weeks on chart: 35, peak: 4)
9. Obsessed by Mariah Carey (weeks on chart: 12, peak: 7)
10. Empire State Of Mind by Jay-Z + Alicia Keys (weeks on chart: 3, peak: 5)
Note that the Billboard API purposely offers up slightly stale charts, so this is really the Top 100 of a few weeks ago. I never listen to the Top 100, and I hadn’t heard of 50% of the artists, so listening to the Billboard Top 100 was quite enlightening. I was surprised at how far removed the Top 100 is from the music that I (and everyone I know) listen to every day.
To build the list I used my JSpot and a (yet to be released) Java client for the Billboard API. (If you are interested in this API, let me know and I’ll stick it up on Google Code.) Of course, it’d be really nifty if you could get and listen to a chart for a given week (i.e. let me listen to the Billboard chart for the week that I graduated from high school). Sounds like something to do for Boston Music Hack Day.
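One easy win when resolving 6,482 chart entries against only 2,354 unique tracks is to cache lookups, so each unique artist/title pair costs just one web service call. Here’s a sketch of such a memoizing wrapper (the class is hypothetical; the resolver function passed in stands in for a JSpot search call):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.BiFunction;

/** Caches (artist, title) -> Spotify ID lookups so duplicates cost nothing. */
public class CachingResolver {
    private final Map<String, String> cache = new HashMap<>();
    private final BiFunction<String, String, String> resolver;
    private int misses = 0;

    /** resolver maps (artist, title) to a Spotify ID, e.g. via a JSpot search. */
    public CachingResolver(BiFunction<String, String, String> resolver) {
        this.resolver = resolver;
    }

    public String resolve(String artist, String title) {
        // Normalize so 'The Climb' and 'the climb' hit the same cache entry.
        String key = (artist + "\t" + title).toLowerCase();
        if (!cache.containsKey(key)) {
            misses++;
            cache.put(key, resolver.apply(artist, title));
        }
        return cache.get(key);
    }

    /** Number of actual web service calls made so far. */
    public int misses() {
        return misses;
    }
}
```

With the chart numbers above, this cuts the Spotify call count by nearly two thirds.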
Update: I’ve made another list that is a little bit more in line with my own music tastes:
The Spotified Billboard Top Modern Rock/Alternative Albums
Who isn’t coming to Boston Music Hackday?
Look at all the companies and organizations coming to Boston Music Hack Day. Time is running out and only a few slots are left, so if you want to go, sign up soon.
Build one of these at Boston Music Hack Day
Noah Vawter will be holding a workshop during the Boston Music Hack Day where you can learn how to build a working prototype Exertion Instrument. It is unclear at this time if a leekspin lesson is included. Details on the Exertion Instrument site.
Where is my JSpot?
Posted by Paul in code, fun, Music, web services on November 3, 2009
I like Spotify. I like Java. So I combined them. Here’s a Java client for the new Spotify metadata API: JSpot
This client lets you do things like search for a track by name and get the Spotify ID for the track so you can play the track in Spotify. This is useful for all sorts of things like building web apps that use Spotify to play music, or perhaps to build a Playdar resolver so you can use Spotify and Playdar together.
Here’s some sample code that prints out the popularity and Spotify ID for every version of Weezer’s ‘My Name Is Jonas’.
Spotify spotify = new Spotify();
Results<Track> results = spotify.searchTrack("Weezer", "My name is Jonas");
for (Track track : results.getItems()) {
    System.out.printf("%.2f %s \n", track.getPopularity(), track.getId());
}
This prints out a list of popularity scores and Spotify track links. If you have Spotify and you click on those links, and those tracks are available in your locale, you should hear Weezer’s nerd anthem.
You can search for artists, albums and tracks, and you can get all sorts of information back, such as release dates for albums, countries where the music can be played, track length, and popularity for artists, tracks and albums. It is very much a 0.1 release: the search functionality is complete so it’s quite useful, but I haven’t implemented the ‘lookup’ methods yet. There are some javadocs. There’s a jar file: jspot.jar. And it is all open source: jspot at Google Code.
Poolcasting: an intelligent technique to customise music programmes for their audience
Posted by Paul in Music, music information retrieval, research on November 2, 2009
In preparation for his defense, Claudio Baccigalupo has placed his thesis online: Poolcasting: an intelligent technique to customise music programmes for their audience. It looks to be an in-depth look at playlisting.
Here’s the abstract:
Poolcasting is an intelligent technique to customise musical sequences for groups of listeners. Poolcasting acts like a disc jockey, determining and delivering songs that satisfy its audience. Satisfying an entire audience is not an easy task, especially when members of the group have heterogeneous preferences and can join and leave the group at different times. The approach of poolcasting consists in selecting songs iteratively, in real time, favouring those members who are less satisfied by the previous songs played.
Poolcasting additionally ensures that the played sequence does not repeat the same songs or artists closely and that pairs of consecutive songs ‘flow’ well one after the other, in a musical sense. Good disc jockeys know from expertise which songs sound well in sequence; poolcasting obtains this knowledge from the analysis of playlists shared on the Web. The more two songs occur closely in playlists, the more poolcasting considers two songs as associated, in accordance with the human experiences expressed through playlists. Combining this knowledge and the music profiles of the listeners, poolcasting autonomously generates sequences that are varied, musically smooth and fairly adapted for a particular audience.
A natural application for poolcasting is automating radio programmes. Many online radios broadcast on each channel a random sequence of songs that is not affected by who is listening. Applying poolcasting can improve radio programmes, playing on each channel a varied, smooth and group-customised musical sequence. The integration of poolcasting into a Web radio has resulted in an innovative system called Poolcasting Web radio. Tens of people have connected to this online radio during one year providing first-hand evaluation of its social features. A set of experiments have been executed to evaluate how much the size of the group and its musical homogeneity affect the performance of the poolcasting technique.
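The playlist-mining step in the abstract lends itself to a quick sketch: the more often two songs occur near each other in playlists shared on the Web, the stronger their association. Here’s a toy version of that counting (the window size and raw-count scoring are my own simplification, not the thesis’s actual formula):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Counts how often pairs of songs occur near each other across playlists. */
public class SongAssociation {
    private final Map<String, Integer> counts = new HashMap<>();

    private String key(String a, String b) {
        // Order-independent key so (a, b) and (b, a) share one count.
        return a.compareTo(b) < 0 ? a + "|" + b : b + "|" + a;
    }

    /** Counts pairs of songs that occur within `window` positions of each other. */
    public void addPlaylist(List<String> playlist, int window) {
        for (int i = 0; i < playlist.size(); i++) {
            for (int j = i + 1; j < playlist.size() && j <= i + window; j++) {
                counts.merge(key(playlist.get(i), playlist.get(j)), 1, Integer::sum);
            }
        }
    }

    public int score(String a, String b) {
        return counts.getOrDefault(key(a, b), 0);
    }
}
```

A DJ-like selector could then prefer, among candidate songs, the one whose score against the previously played song is highest – the “flow” the abstract describes.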
I’m quite interested in this topic so it looks like my reading list is set for the week.
The Future of the Music Industry
Posted by Paul in Music, recommendation on November 1, 2009
Last week NPR’s On the Media had a special show called ‘The Future of Music’ – all about the current state of the music industry and where it is all going. The hour is broken into a number of sections:
- Facing the (Free) Music – about what has happened in the 10 years since Napster. Yep, Spotify gets a mention. Choice quote from Hilary Rosen: “Napster was a missed opportunity”
- They Say That I Stole This – about the legalities of sampling (with interviews with Girl Talk, among others)
- Played Out – interview with John Scher about the state of live music
- Teens on Tunes – interviews with teens about where they get their music. Answer: Limewire
- Charting the Charts – interesting piece about the charts – the history of Billboard and the next generation of tracking, including an interview with Bandmetrics founder Duncan Freeman (way to go Duncan!)
- Why I’m not afraid to take your money – interesting interview with Amanda Palmer about how artists make money in today’s music world
One thing that they didn’t talk about at all was music discovery – no mention of the role of the critic, music blogs or the Hype Machine; no discussion of the role social sites like Last.fm play in music discovery; no mention of automated tools for music discovery like recommenders and playlisters. Maybe next year, when everyone has access to infinite music, we’ll see more emphasis on discovery tools.
It was a great show. Highly recommended: NPR’s On the Media Special Edition: The Future of the Music Industry
10 Awesome things about ISMIR 2009
ISMIR 2009 is over – but it will not be soon forgotten. It was a wonderful event, with seemingly flawless execution. Some of my favorite things about the conference this year:
- The proceedings – distributed on a USB stick hidden in a pen that has a laser! And the battery for the laser recharges when you plug the USB stick into your computer. How awesome is that!? (The printed version is very nice too, but it doesn’t have a laser).
- The hotel – very luxurious while at the same time, very affordable. I had a wonderful view of Kobe, two very comfortable beds and a toilet with more controls than the dashboard on my first car.
- The presentation room – very comfortable with tables for those sitting towards the front, great audio and video and plenty of power and wireless for all.
- The banquet – held in the most beautiful room in the world with very exciting Taiko drumming as entertainment.
- The details – the organizing team seemed to pay attention to every little detail and request: from the taped numbers on the floor so that the 30 folks giving their 30-second pitches during poster madness would know just where to stand, to the signs on the coffeepots telling you that the coffee was being made, to the signs on the train to the conference center welcoming us to ISMIR 2009. No detail was left to chance.
- The food – our stomachs were kept quite happy, with sweet breads and pastries every morning, bento boxes for lunch, and coffee, juices, waters, and the mysterious beverage ‘black’ that I didn’t dare to try. My absolute favorite meal was the box lunch during the tutorial day – a box with a string. When you are ready to eat you give the string a sharp tug, wait a few minutes for the magic to do its job, and then open the box and eat a piping hot bowl of noodles and vegetables. Almost as cool as the laser-augmented proceedings.
- The city – Kobe is a really interesting city. I spent a few days walking around and was fascinated by it all; I really felt like I was walking around in the future. It was extremely clean, and the people were very polite, friendly and always willing to help. Going into some parts of town was sensory overload – the colors, sounds, smells and sights were overwhelming – it was really fun.
- The keynote – music-making robots – what more is there to say?
- The program – the quality of papers was very high, and there were some outstanding posters and oral presentations. Many thanks to George and Keiji for organizing the reviews to create a great program. (More on my favorite posters and papers in an upcoming post.)
- f(mir) – the student-organized workshop looked at what MIR research would look like in 10, 20 or even 50 years (basically after I’m dead and gone). The presentations in this workshop were quite provocative – well done, students!
I write this post as I sit in the airport in Osaka waiting for my flight home. I’m tired, but very energized to explore the many new ideas that I encountered at the conference. It was a great week. I want to extend my personal thanks to Professor Fujinaga and Professor Goto and the rest of the conference committee for putting together a wonderful week.
ISMIR Oral Session – Sociology and Ethnomusicology
Session Title: Sociology & Ethnomusicology
Session Chair: Frans Wiering (Universiteit Utrecht, Netherlands)
Exploring Social Music Behavior: An Investigation of Music Selection at Parties
Sally Jo Cunningham and David M. Nichols
Abstract: This paper builds an understanding of how music is currently listened to by small (fewer than 10 individuals) to medium-sized (10 to 40 individuals) gatherings of people—how songs are chosen for playing, how the music fits in with other activities of group members, who supplies the music, and the hardware/software that supports song selection and presentation. This fine-grained context emerges from a qualitative analysis of a rich set of participant observations and interviews focusing on the selection of songs to play at social gatherings. We suggest features for software to support music playing at parties.
Notes:
- What happens at parties, especially informal small and medium sized parties
- Observations and interviews – 43 party observations
- Analyzing the data: key events that drive the activity, patterns of behavior, social roles
- Observations
- music selection cannot require fine motor movements (because people are drinking and holding their drinks) (‘drinking dyslexia’)
- Need for large displays
- Party collection from different donors, sources, media
- Pre-party: host collection
- As party progresses: additional contributions (ipods, thumbdrives, etc)
- Challenge: bring together into a single browseable searchable collection
- Roles: Host, guest, guest of honor. Host provides initial collection, party playlist. High stress ‘guilty pleasures’
- Guests: may contribute, could insult the host, may modify the party playlist if they receive an invitation from the host. Voting jukeboxes may help
- Guest of Honor had ultimate control
- insertion into playlist, looking for specific song, type of song.
- Delete songs from playlist without disrupting the party
- Setting and maintaining atmosphere
- softer for starts, move to faster louder, ending with chilling out
- What next: other situations, such as a long car ride
- Questions: Spotify turned into the best party
Great study, great presentation.
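The “voting jukebox” idea from the notes above is easy to prototype: guests upvote songs and the jukebox always plays the current favorite next. A minimal sketch (the data model is my own invention, not from the paper):

```java
import java.util.HashMap;
import java.util.Map;

/** Guests upvote songs; the most-voted song plays next and leaves the queue. */
public class VotingJukebox {
    private final Map<String, Integer> votes = new HashMap<>();

    public void vote(String song) {
        votes.merge(song, 1, Integer::sum);
    }

    /** Pops the most-voted song; ties broken alphabetically so playback is deterministic. */
    public String playNext() {
        String best = null;
        for (Map.Entry<String, Integer> e : votes.entrySet()) {
            if (best == null
                    || e.getValue() > votes.get(best)
                    || (e.getValue().equals(votes.get(best)) && e.getKey().compareTo(best) < 0)) {
                best = e.getKey();
            }
        }
        if (best != null) votes.remove(best);
        return best;
    }
}
```

A real party version would also want the large display and coarse controls the study calls for, so guests can vote without ‘drinking dyslexia’ getting in the way.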
Music and Geography: Content Description of Musical Audio from Different Parts of the World
Emilia Gómez, Martín Haro and Perfecto Herrera
Abstract: This paper analyses how audio features related to different musical facets can be useful for the comparative analysis and classification of music from diverse parts of the world. The music collection under study gathers around 6,000 pieces, including traditional music from different geographical zones and countries, as well as a varied set of Western musical styles. We achieve promising results when trying to automatically distinguish music from Western and non-Western traditions. An accuracy of 86.68% is obtained using only 23 audio features, which are representative of distinct musical facets (timbre, tonality, rhythm), indicating their complementarity for music description. We also analyze the relative performance of the different facets and the capability of various descriptors to identify certain types of music. We finally present some results on the relationship between geographical location and musical features in terms of extracted descriptors. All the reported outcomes demonstrate that automatic description of audio signals together with data mining techniques provide means to characterize huge music collections from different traditions, complementing ethnomusicological manual analysis and providing a link between music and geography.
You Call That Singing? Ensemble Classification for Multi-Cultural Collections of Music Recordings
Polina Proutskova and Michael Casey
Abstract: The wide range of vocal styles, musical textures and recording techniques found in ethnomusicological field recordings leads us to consider the problem of automatically labeling the content to know whether a recording is a song or instrumental work. Furthermore, if it is a song, we are interested in labeling aspects of the vocal texture: e.g. solo, choral, acapella or singing with instruments. We present evidence to suggest that automatic annotation is feasible for recorded collections exhibiting a wide range of recording techniques and representing musical cultures from around the world. Our experiments used the Alan Lomax Cantometrics training tapes data set, to encourage future comparative evaluations. Experiments were conducted with a labeled subset consisting of several hundred tracks, annotated at the track and frame levels, as acapella singing, singing plus instruments or instruments only. We trained frame-by-frame SVM classifiers using MFCC features on positive and negative exemplars for two tasks: per-frame labeling of singing and acapella singing. In a further experiment, the frame-by-frame classifier outputs were integrated to estimate the predominant content of whole tracks. Our results show that frame-by-frame classifiers achieved 71% frame accuracy and whole track classifier integration achieved 88% accuracy. We conclude with an analysis of classifier errors suggesting avenues for developing more robust features and classifier strategies for large ethnographically diverse collections.
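The “classifier integration” step in the abstract – combining per-frame labels into one whole-track label – can be as simple as a majority vote over frames. A sketch of that idea (the paper doesn’t spell out its integration rule, so majority voting here is an assumption, not their method):

```java
import java.util.HashMap;
import java.util.Map;

/** Integrates per-frame classifier outputs into a single whole-track label. */
public class TrackLabeler {
    /** Labels a whole track with the most frequent of its per-frame labels. */
    public static String majorityLabel(String[] frameLabels) {
        Map<String, Integer> counts = new HashMap<>();
        for (String label : frameLabels) {
            counts.merge(label, 1, Integer::sum);
        }
        String best = null;
        for (Map.Entry<String, Integer> e : counts.entrySet()) {
            if (best == null || e.getValue() > counts.get(best)) {
                best = e.getKey();
            }
        }
        return best;
    }
}
```

Voting across many frames smooths over individual frame errors, which is consistent with the whole-track accuracy (88%) being well above the frame accuracy (71%).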
ISMIR Oral Session – Folk Songs
Session Title: Folk songs
Session Chair: Remco C. Veltkamp (Universiteit Utrecht, Netherlands)
Global Feature Versus Event Models for Folk Song Classification
Ruben Hillewaere, Bernard Manderick and Darrell Conklin
Abstract: Music classification has been widely investigated in the past few years using a variety of machine learning approaches. In this study, a corpus of 3367 folk songs, divided into six geographic regions, has been created and is used to evaluate two popular yet contrasting methods for symbolic melody classification. For the task of folk song classification, a global feature approach, which summarizes a melody as a feature vector, is outperformed by an event model of abstract event features. The best accuracy obtained on the folk song corpus was achieved with an ensemble of event models. These results indicate that the event model should be the default model of choice for folk song classification.

Robust Segmentation and Annotation of Folk Song Recordings
Meinard Mueller, Peter Grosche and Frans Wiering
Abstract: Even though folk songs have been passed down mainly by oral tradition, most musicologists study the relation between folk songs on the basis of score-based transcriptions. Due to the complexity of audio recordings, once having the transcriptions, the original recorded tunes are often no longer studied in the actual folk song research though they still may contain valuable information. In this paper, we introduce an automated approach for segmenting folk song recordings into their constituent stanzas, which can then be made accessible to folk song researchers by means of suitable visualization, searching, and navigation interfaces. Performed by elderly non-professional singers, the main challenge with the recordings is that most singers have serious problems with the intonation, fluctuating with their voices even over several semitones throughout a song. Using a combination of robust audio features along with various cleaning and audio matching strategies, our approach yields accurate segmentations even in the presence of strong deviations.
Notes: Interesting talk (as always) by Meinard about dealing with real world problems when dealing with folk song audio recordings.
Supporting Folk-Song Research by Automatic Metric Learning and Ranking
Korinna Bade, Andreas Nurnberger, Sebastian Stober, Jörg Garbers and Frans Wiering
Abstract: In folk song research, appropriate similarity measures can be of great help, e.g. for classification of new tunes. Several measures have been developed so far. However, a particular musicological way of classifying songs is usually not directly reflected by just a single one of these measures. We show how a weighted linear combination of different basic similarity measures can be automatically adapted to a specific retrieval task by learning this metric based on a special type of constraints. Further, we describe how these constraints are derived from information provided by experts. In experiments on a folk song database, we show that the proposed approach outperforms the underlying basic similarity measures and study the effect of different levels of adaptation on the performance of the retrieval system.
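The metric at the heart of the abstract is just a weighted linear combination of basic similarity measures; the interesting part of the paper is learning the weights from expert-derived constraints. The combination itself, with weights handed in rather than learned, looks like this (a hypothetical sketch, not the authors’ code):

```java
/** Combines several basic melody similarity measures into one learned metric. */
public class CombinedSimilarity {
    /** weights[i] scales the i-th basic similarity; both arrays must align. */
    public static double combine(double[] weights, double[] similarities) {
        if (weights.length != similarities.length) {
            throw new IllegalArgumentException("weights and similarities must align");
        }
        double total = 0;
        for (int i = 0; i < weights.length; i++) {
            total += weights[i] * similarities[i];
        }
        return total;
    }
}
```

Adapting the metric to a retrieval task then means adjusting the weights, not the underlying measures, which is why the learned combination can outperform any single basic measure.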





