MOG has posted a video demonstrating their new playlist editor for the soon-to-be-released MOG All Access.
It looks pretty nifty – lightweight, tag-able, shareable playlists. It’s nice to see MOG paying attention to playlists. With millions of songs to choose from, music discovery gets to be a big problem. Playlists can help with that. Of course, playlists bring their own discovery problems. How can I discover new playlists that contain music that I like? Currently, most sites that support playlists and playlist sharing provide only limited ways for people to discover new playlists. However, as playlists become more ubiquitous, sites like MOG will need to expand the tools for helping people find new and interesting playlists. Some options for playlist discovery:
- Search for playlists by tag. Example: “Find playlists tagged ‘emo’ and ‘christmas’”
- Search for playlists by artist / track. Example: “Find playlists that have songs by Deerhoof”
- Query by example. Example: “Find playlists that are similar to this playlist”
- Popularity. Example: “Play me the most popular playlists” or “Play me the Billboard Hot 100”
- Social discovery. Example: “What playlists are my friends listening to now?”
- Expert curated. Example: “Give me the Pitchfork 100 playlist”
- Machine made. Example: “Build me a playlist that is similar to this playlist” or “Build me a playlist for the tags ‘emo’, ‘female’, ‘90s’”
- Recommended playlists. Example: “Find me playlists that I will like based upon my music taste and my context (e.g. the time of day)”
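As a concrete illustration of the first option, searching by tag reduces to simple set containment once playlists carry tag sets. Here’s a minimal sketch – the playlists, tags, and function name are all made up for illustration; a real site like MOG would expose this through its own search API:

```python
# Toy tag-based playlist search. The playlists and tags below are
# invented; a real service would query its own catalog.
playlists = [
    {'title': 'Sad Holidays',   'tags': {'emo', 'christmas'}},
    {'title': 'Riot Grrrl 101', 'tags': {'female', '90s', 'punk'}},
    {'title': 'Mall Xmas',      'tags': {'christmas', 'pop'}},
]

def find_by_tags(playlists, wanted):
    """Return the playlists whose tags include every wanted tag."""
    wanted = set(wanted)
    return [p for p in playlists if wanted <= p['tags']]

# "Find playlists tagged 'emo' and 'christmas'"
for p in find_by_tags(playlists, ['emo', 'christmas']):
    print(p['title'])   # prints: Sad Holidays
```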
It’s good to see MOG working hard to make the creation of playlists easy. Next step is to make finding new and interesting playlists easy.
In preparation for his defense, Claudio Baccigalupo has posted his thesis online: Poolcasting: an intelligent technique to customise music programmes for their audience. It looks to be an in-depth look at playlisting.
Here’s the abstract:
Poolcasting is an intelligent technique to customise musical sequences for groups of listeners. Poolcasting acts like a disc jockey, determining and delivering songs that satisfy its audience. Satisfying an entire audience is not an easy task, especially when members of the group have heterogeneous preferences and can join and leave the group at different times. The approach of poolcasting consists in selecting songs iteratively, in real time, favouring those members who are less satisfied by the previous songs played.
Poolcasting additionally ensures that the played sequence does not repeat the same songs or artists closely and that pairs of consecutive songs ‘flow’ well one after the other, in a musical sense. Good disc jockeys know from expertise which songs sound well in sequence; poolcasting obtains this knowledge from the analysis of playlists shared on the Web. The more two songs occur closely in playlists, the more poolcasting considers two songs as associated, in accordance with the human experiences expressed through playlists. Combining this knowledge and the music profiles of the listeners, poolcasting autonomously generates sequences that are varied, musically smooth and fairly adapted for a particular audience.
A natural application for poolcasting is automating radio programmes. Many online radios broadcast on each channel a random sequence of songs that is not affected by who is listening. Applying poolcasting can improve radio programmes, playing on each channel a varied, smooth and group-customised musical sequence. The integration of poolcasting into a Web radio has resulted in an innovative system called Poolcasting Web radio. Tens of people have connected to this online radio during one year providing first-hand evaluation of its social features. A set of experiments have been executed to evaluate how much the size of the group and its musical homogeneity affect the performance of the poolcasting technique.
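The fairness idea at the heart of the abstract – favour whoever is least satisfied by the songs played so far – can be sketched in a few lines. This is only my toy reading of the technique, not Baccigalupo’s actual algorithm, and the listeners, songs, and ratings here are all invented:

```python
# Toy sketch of poolcasting's fairness step: pick the next song to
# please the listener who is least satisfied so far. All data invented.

def next_song(preferences, satisfaction, candidates):
    """preferences: {listener: {song: rating in 0..1}}
    satisfaction: {listener: running satisfaction so far}
    candidates: songs eligible to play next."""
    # Find the least-satisfied listener...
    unhappiest = min(satisfaction, key=satisfaction.get)
    # ...and play the candidate that listener rates highest.
    return max(candidates, key=lambda s: preferences[unhappiest].get(s, 0.0))

prefs = {'ann': {'song_a': 0.9, 'song_b': 0.2},
         'bob': {'song_a': 0.1, 'song_b': 0.8}}
sat = {'ann': 0.7, 'bob': 0.3}   # bob is less satisfied so far
print(next_song(prefs, sat, ['song_a', 'song_b']))   # prints: song_b
```

The real system layers a lot more on top of this – song/artist repetition constraints and the web-mined “flow” between consecutive songs – but the group-fairness loop is the distinctive part.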
I’m quite interested in this topic so it looks like my reading list is set for the week.
Last night I was watching the pilot for Glee (a snarky TV version of High School Musical) with my 3 teenage daughters. I was surprised to hear the soundtrack filled with songs by the band Journey, songs that brought me back to my own high school years. The thing that I like most about Journey is that many of their songs have a slow and gradual build-up over the course of the whole song, as in this song, Lovin’, Touchin’, Squeezin’:
A number of my favorite songs have this slow build-up. The canonical example is Zep’s ‘Stairway to Heaven’ – it starts with a slow acoustic guitar and over the course of 8 minutes builds to a metal frenzy. I thought it would be fun to see if I could write a bit of software that could find songs that have the same arc as ‘Stairway to Heaven’ or ‘Lovin’, Touchin’, Squeezin’’ – songs that have this slow build. With this ‘stairway detector’ I could build playlists filled with the songs that fire me up.
The obvious place to start is to look at how the loudness of a song changes over time. To do this I used the Echo Nest developer API to extract loudness as a function of time for Journey’s Lovin’, Touchin’, Squeezin’:
In this plot the light green curve is the loudness, while the blue line is a windowed average of the loudness. The plot shows a nice rise in volume over the course of the song. Compare that to a song like the Beatles’ ‘Ticket to Ride’, which doesn’t have this upward slope:
From these two examples, it is pretty clear that we can build our stairway detector just by looking at the average slope of the volume. The higher the slope, the bigger the build. Now, I suspect that there are lots of ways to find the average slope of a bumpy line – but I like to try the simplest thing that could possibly work first – and for me the simplest thing was to take the ratio of the average loudness of the two halves of the song. Since loudness is measured in dB (so a louder half is less negative), dividing the first-half average by the second-half average gives a ratio greater than one for songs that build. For the Journey song the average loudness of the second half is -15.86 dB and the average of the first half is -24.37 dB, giving a ratio of 1.54, while ‘Ticket to Ride’ gets a ratio of 1.06. Here’s the Journey song with the averages shown:
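Here’s that score as a few lines of code. The loudness curves are invented stand-ins for the per-song dB data; in practice the values would come from the Echo Nest analysis. Note the division order: since dB values are negative, first-half over second-half is what reproduces the 1.54 for the Journey numbers:

```python
# Crescendo score: ratio of the average loudness (dB) of the two
# halves of a song. dB values are negative, so a louder (less
# negative) second half makes first/second greater than one.

def crescendo_score(loudness):
    """loudness: per-segment loudness values in dB, in time order."""
    half = len(loudness) // 2
    first = sum(loudness[:half]) / half
    second = sum(loudness[half:]) / (len(loudness) - half)
    return first / second

build_up = [-30.0, -26.0, -22.0, -18.0]   # made-up rising loudness
flat     = [-18.0, -18.0, -18.0, -18.0]   # made-up level loudness
print(crescendo_score(build_up))   # 1.4 (builds)
print(crescendo_score(flat))       # 1.0 (no build)
```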
With this new found metric I analyzed a few thousand of the tracks in my personal collection to find the songs with the biggest crescendos. The biggest of all was this song by Muse with a whopping score of 3.07:
The metric isn’t perfect. For instance, I would have expected the Postal Service’s ‘Natural Anthem’ to have a high score because it has such a great build-up, but it only gets a score of 1.19. Looking at the plot we can see why:
Of course, we can use this ratio to find tracks that go the other way – songs that gradually wind down. These seem to occur less frequently than songs that build up. One example is Neutral Milk Hotel’s ‘Two-Headed Boy’:
Despite the fact that I’m using a very naive metric to find the loudness slope, this stairway detector is pretty effective at finding songs that have that slow build. It’s another tool that I can use to help build interesting playlists. This is one of the really cool things about how the Echo Nest approaches music playlisting. By having an understanding of what the music actually sounds like, we can build much more interesting playlists than you get from Genius-style playlists that only take artist co-occurrence into account.
Last week we released Pyechonest, a Python library for the Echo Nest API. Pyechonest gives the Python programmer access to the entire Echo Nest API, including artist- and track-level methods. Now, after 9 years working at Sun Microsystems, I am a diehard Java programmer, but I must say that I really enjoy the nimbleness and expressiveness of Python. It’s fun to write little Python programs that do the exact same thing as big Java programs. For example, I wrote an artist radio program in Python that, given a seed artist, generates a playlist of tracks by wandering around the artists in the neighborhood of the seed artist and gathering audio tracks. With Pyechonest, the core logic is 10 lines of code:
def wander(band, max=10):
    played = []
    while max:
        if band.audio():
            audio = random.choice(band.audio())
            if audio['url'] not in played:
                play(audio)
                played.append(audio['url'])
                max -= 1
        band = random.choice(band.similar())

(You can see/grab the full code with all the boilerplate in the SVN repository.)
This method takes a seed artist (band) and selects a random track from the set of audio that The Echo Nest has found on the web for that artist; if we haven’t already played it, we play it. Then we select a near neighbor of the seed artist and do it all again until we’ve played the desired number of songs.
For such a simple bit of code, the playlists generated are surprisingly good. Here are a few examples:
Seed Artist: Led Zeppelin:
- You Shook Me by Led Zeppelin via licorice-pizza
- Suicide by Thin Lizzy via dmg541
- I Ain’t The One by Lynyrd Skynyrd via artdecade
- Fortunate Son by Creedence Clearwater Revival via onesweetsong
- Susie-Q by Dale Hawkins via boogiewoogieflu
(I think the Dale Hawkins version of Susie-Q after CCR’s Fortunate Son is just brilliant)
Seed Artist: The Decemberists:
- The Wanting Comes In Waves/Repaid by The Decemberists via londononburgeoningmetropolis
- Amazing Grace by Sufjan Stevens via itallstarted
- Baby’s Romance by Chris Garneau via slowcoustic
- Saint Simon by The Shins via pastaprima
- Made Up Love Song #43 by Guillemots via merryswankster
(Note that audio for these examples is audio found on the web – and just like anything on the web the audio could go away at any time)
I think these artist-radio style playlists rival just about anything you can find on current Internet radio sites – which ain’t too bad for 10 lines of code.
There’s a new application in the Echo Nest developer showcase called SynchStep. SynchStep is an iPhone/iPod touch application (currently only for jailbroken devices) that automatically synchronizes the music to your walking or running pace. SynchStep uses the Echo Nest Analyze API to extract the tempo of each song in your collection, and when you are out for a walk or a run it picks a song that matches your tempo.
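The core selection step of a tempo matcher like this can be sketched very simply: given a step rate in steps per minute, play the track in your library whose tempo is closest. The track names and BPM values below are made up, and SynchStep’s actual implementation (and the Echo Nest analysis it builds on) is certainly more sophisticated:

```python
# Toy tempo-matched selection: play the track whose BPM best matches
# the runner's cadence. Library and BPM values are invented.
library = {
    'Born to Run':         147.0,
    'Walking on Sunshine': 109.0,
    'Lose Yourself':       171.0,
}

def match_pace(library, steps_per_minute):
    """Return the track whose tempo is closest to the step rate."""
    return min(library, key=lambda t: abs(library[t] - steps_per_minute))

print(match_pace(library, 112))   # prints: Walking on Sunshine
print(match_pace(library, 168))   # prints: Lose Yourself
```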
I’ve seen a few academic systems that do this sort of thing. For instance, at last year’s ISMIR there was a paper called Development of an automatic music selection system based on runner’s step frequency that described a similar system. But SynchStep is the first system I’ve seen that is available to the general public on a popular platform like the iPhone.
SynchStep is a great example of a context-sensitive playlister. Instead of a list of songs selected via a random number generator, or via a DJ sitting in some dark, smoke-filled sound booth, you get a playlist that matches what you are doing. I think we are going to see more attention paid to context-sensitive playlists: ‘Music for the root canal’, ‘Music that synchronizes with my windshield wipers’, ‘Music for that first date with that girl who you think may be kind of emo’, and so on. To make these kinds of playlists, the playlist generators will have to know what you are doing, and they’ll have to know what the music sounds like. Platforms like the iPhone already provide lots of context – the iPhone knows where you are, what time it is, it can hear you, it can see you, it can feel you move, it knows that the emo girl just sent you a ‘Dear John’ IM, it can even hear your heartbeat. Signal processing and music analysis provide the other piece of the puzzle – knowing what the music sounds like. Just like SynchStep picks a track with a tempo that matches your pace, these next-generation, context-aware playlisters will select music that fits the context. So when that kind-of-emo girl dumps you, your iPhone will know about it and will try to cheer you up with a little Katrina & The Waves. This song has super powers; it can even make the emo boys happy.