Spotify iOS token exchange service in Python

On the very same day that Spotify announced its acquisition of The Echo Nest, they released a brand new Spotify iOS SDK. Trying this new SDK out has been high on my priority list, and finally, after a few crazy weeks, I've had a bit of time to take it for a test drive. I walked through the beginner's tutorial and was up and running with an iOS app in the simulator in about 30 minutes. Easy peasy! The bit that took the longest was setting up the token exchange service. This is a service that you need to run on your own server as part of the authentication process. The tutorial provides a sample service written in Ruby, but I'm not a Ruby programmer, so I had to go through all the gyrations of installing Ruby, figuring out how to install gems, and getting the required gems installed. Once I had everything installed it worked fine and I was able to get the tutorial running.

However, I figure that I'll be working with the iOS SDK a great deal in the future, and I'd rather not have to deal with a Ruby server every time I create a new app, so for my Sunday morning programming project I've rewritten the Ruby swap service in Python. The service is on GitHub here: spotify_token_swap
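If you're curious what such a service boils down to, here is a minimal sketch of a code-for-token swap endpoint, assuming Flask and the requests library. This is not the actual spotify_token_swap code, and the client credentials and callback URL are placeholders you'd fill in from your Spotify developer account:

```python
# Minimal sketch of a Spotify authorization-code swap endpoint.
# Not the actual spotify_token_swap service; credentials are placeholders.
import base64

import requests
from flask import Flask, request, jsonify

app = Flask(__name__)

CLIENT_ID = "your-client-id"          # from your Spotify developer account
CLIENT_SECRET = "your-client-secret"  # keep this on the server only
CALLBACK_URL = "your-app://callback"  # must match your registered redirect URI

@app.route("/swap", methods=["POST"])
def swap():
    # The iOS app posts the authorization code it got from Spotify.
    auth_code = request.form["code"]
    auth_header = base64.b64encode(
        (CLIENT_ID + ":" + CLIENT_SECRET).encode()).decode()

    # Exchange the code for access and refresh tokens with the
    # Spotify accounts service.
    resp = requests.post(
        "https://accounts.spotify.com/api/token",
        data={
            "grant_type": "authorization_code",
            "code": auth_code,
            "redirect_uri": CALLBACK_URL,
        },
        headers={"Authorization": "Basic " + auth_header},
    )
    return jsonify(resp.json()), resp.status_code

if __name__ == "__main__":
    app.run(port=1234)
```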

If you are going to be using the new Spotify iOS SDK to create apps and you'd rather deal with Python than Ruby, then you might find it useful.



Echo Nest Radio on Spotify

[Screenshot: Spotify playing "Wild Flower" by The Cult]

I work for Spotify now – so for my Sunday morning programming project I thought I'd write a simple Spotify App that uses The Echo Nest API to create playlists based upon a seed song. I've done this before, but the last time was a few years ago and the Spotify Apps API has changed quite a bit since then, so I thought I'd use this as an opportunity to refresh my understanding of the Spotify API and to demonstrate how to write a Spotify App that uses The Echo Nest API.

I created an Echo Nest Radio app. It is a very simple app: it looks at what song you are currently playing and generates an Echo Nest playlist based upon that seed song. The code is pretty straightforward. It grabs the Now Playing track from Spotify, gets the track's ID, and uses that as a seed for The Echo Nest song-radio static playlist API. This call returns Spotify track IDs (thanks to our Rosetta Stone ID mapping layer) that are then tossed into a temporary playlist, which is used to build a List view that is displayed in the app. All told it is just over 100 lines of JavaScript.
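The server-side half of that flow is easy to sketch in Python. Here's a hedged example of the Echo Nest static playlist call, with the parameters as I remember them; the API key and seed track ID are placeholders:

```python
# Sketch: fetch an Echo Nest song-radio playlist seeded by a Spotify track.
# The bucket parameter asks Rosetta Stone to return Spotify foreign IDs.
import requests

API_KEY = "YOUR_ECHO_NEST_API_KEY"                 # placeholder
SEED = "spotify-WW:track:6Sy9BUbgFse0n0LPA5lwy5"   # hypothetical seed track

params = {
    "api_key": API_KEY,
    "type": "song-radio",
    "song_id": SEED,
    "bucket": "id:spotify-WW",  # return Spotify IDs via Rosetta Stone
    "limit": "true",            # only songs that resolve in that ID space
    "results": 20,
}
resp = requests.get(
    "http://developer.echonest.com/api/v4/playlist/static",
    params=params).json()

for song in resp["response"]["songs"]:
    # Each song carries a Spotify ID we can hand straight to the player.
    print(song["artist_name"], "-", song["title"],
          song["foreign_ids"][0]["foreign_id"])
```

The app itself makes the same call in JavaScript from inside Spotify.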

It did take me a bit of time to get the hang of the newer Spotify Apps API. It has changed quite a bit since I last used it and many of the examples that I relied on in the past, like Peter Watt’s excellent Kitchen Sink app, use an older version of the API. The new version has some significant changes including a nifty new module system along with a new approach to managing long-running queries that relies on promises. Once I got the hang of it, I decided that I like the new version – it makes for cleaner code and a much more efficient app since much less data needs to be moved around.

The app is on GitHub – to use it you need to sign up for a developer account on Spotify and follow the rest of the Getting Started instructions (this means that if you are not a developer, you'll probably not be able to use the app).

The Spotify Apps API makes it super easy to create apps that run inside Spotify. It's a very familiar programming environment for anyone who has done web programming. You can use all of your favorite libraries, from jQuery to Lo-Dash, to create really compelling apps that sit on top of the millions and millions of tracks in the Spotify catalog. However, unlike a web app, where anyone can publish their app on the web, to publish a Spotify App you have to submit your app to the Spotify App approval process, and only apps that Spotify approves are published. Spotify sets a high bar for what ultimately gets approved – which keeps the quality of the apps high, but also means that hacks and experiments built on the Spotify Apps platform will likely never be approved for release to the general public. It's a difficult balancing act for Spotify. They've built the ultimate music hacking platform with this API, but if they open the doors to every music hack, they will ultimately dilute the listening experience of the user. Like so many other app stores that are flooded with garbage apps, if they published every app and hack, Spotify listeners would be inundated with the musical equivalent of flashlight and fart apps. With the approval process, Spotify essentially says 'the listener comes first', which is the right choice. Still, as a music hacker I do wish it were easier to publish rich music apps built on the Spotify platform. Luckily, Spotify is committed to building an active and vibrant developer ecosystem, so I don't expect they will be standing still.

Update 3/24/14: I've added the ability to save these playlists back to Spotify, so you can take the Echo Nest radio playlists with you.

Second update 3/24/14: Note that Spotify's recent announcement that they are closing app submissions means that you won't be able to submit apps for publishing anymore, but you should still be able to create your own.


How the Autocanonizer works

Last week at the SXSW Music Hack Championship hackathon I built The Autocanonizer, an app that tries to turn any song into a canon by playing it against a copy of itself. In this post, I explain how it works.

At the core of The Autocanonizer are three functions: (1) build simultaneous audio streams for the two voices of the canon, (2) play them back simultaneously, and (3) supply a visualization that gives the listener an idea of what is happening under the hood. Let's look at each of these three functions:

(1A) Build simultaneous audio streams – finding similar sounding beats
The goal of The Autocanonizer is to fold a song in on itself in such a way that the result still sounds musical. To do this, we use The Echo Nest analyzer and the jremix library to do much of the heavy lifting. First we use the analyzer to break the song down into beats. Each beat is associated with a timestamp, a duration, a confidence, and a set of overlapping audio segments. An audio segment contains a detailed description of a single audio event in the song. It includes harmonic data (i.e. the pitch content), timbral data (the texture of the sound), and a loudness profile. Using this info we can create a Beat Distance Function (BDF) that returns a value representing the relative distance between any two beats in the audio space. Beats that are close together in this space sound very similar; beats that are far apart sound very different. The BDF works by calculating the average distance between overlapping segments of the two beats, where the distance between any two segments is a weighted combination of the Euclidean distances between the pitch, timbral, loudness, duration, and confidence vectors. The weights control which part of the sound takes more precedence in determining beat distance. For instance, we can give more weight to the harmonic content of a beat, or to the timbral quality of the beat. There's no hard science for selecting the weights; I just picked some weights to start with and tweaked them a few times based on how well it worked. I started with the same weights that I used when creating the Infinite Jukebox (which also relies on beat similarity), but ultimately gave more weight to the harmonic component since good harmony is so important to The Autocanonizer.
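Here's a rough sketch of what such a distance function might look like. The field names and starting weights are illustrative, not the exact Echo Nest analysis schema:

```python
# Sketch of a Beat Distance Function (BDF): the average weighted distance
# between the overlapping segments of two beats. Field names and weights
# are illustrative, not the actual Autocanonizer values.
import math

WEIGHTS = {"pitch": 10.0, "timbre": 1.0, "loudness": 1.0,
           "duration": 100.0, "confidence": 1.0}

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def segment_distance(s1, s2):
    # Weighted combination of per-feature distances between two segments.
    return (WEIGHTS["pitch"] * euclidean(s1["pitches"], s2["pitches"])
            + WEIGHTS["timbre"] * euclidean(s1["timbre"], s2["timbre"])
            + WEIGHTS["loudness"] * abs(s1["loudness_max"] - s2["loudness_max"])
            + WEIGHTS["duration"] * abs(s1["duration"] - s2["duration"])
            + WEIGHTS["confidence"] * abs(s1["confidence"] - s2["confidence"]))

def beat_distance(beat1, beat2):
    # Average distance between the segments that overlap each beat.
    pairs = list(zip(beat1["segments"], beat2["segments"]))
    if not pairs:
        return float("inf")
    return sum(segment_distance(a, b) for a, b in pairs) / len(pairs)
```

Raising the pitch weight, as described above, makes harmonic agreement dominate the notion of similarity.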

(1B) Build simultaneous audio streams – building the canon
The next challenge, and perhaps the biggest challenge of the whole app, is to build the canon – that is, given the Beat Distance Function, create two audio streams, one beat at a time, that sound good when played simultaneously. The first stream is easy: we'll just play the beats in normal beat order. It's the second stream, the canon stream, that we have to worry about. The challenge: put the beats in the canon stream in an order such that (1) the beats are in a different order than the main stream, and (2) they sound good when played with the main stream.

The first thing we can try is to make each beat in the canon stream be the most similar sounding beat to the corresponding beat in the main stream.  If we do that we end up with something that looks like this:

[Figure: the similar-beat graph for "Someone Like You" (autocanonized) by Adele]

It's a rat's nest of connections; very little structure is evident. You can listen to what it sounds like by clicking here: Experimental Rat's Nest version of Someone Like You (autocanonized). It's worth a listen to get a sense of where we start from. So why does this bounce all over the place like this? There are lots of reasons. First, there's lots of repetition in music – so if I'm in the first chorus, the most similar beat may be in the second or third chorus. Both may sound very similar, and it is practically a roll of the dice which one will win, leading to much bouncing between the two choruses. Second, since we have to find a similar beat for every beat, even beats that have no near neighbors are forced into the graph, which turns it into spaghetti. Finally, the underlying beat distance function relies on weights that are hard to generalize for all songs, leading to more noise. The bottom line is that this simple approach leads to a chaotic and mostly non-musical canon with head-jarring transitions on the canon channel. We need to do better.
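For concreteness, that naive mapping is just a nearest-neighbor lookup per beat. A sketch, building on the hypothetical beat_distance function above:

```python
# Sketch: the naive "rat's nest" canon stream - for every beat in the main
# stream, pick the most similar other beat in the song.
def nearest_beat(beat, all_beats):
    # Exclude the beat itself so the canon stream differs from the main stream.
    candidates = [b for b in all_beats if b is not beat]
    return min(candidates, key=lambda b: beat_distance(beat, b))

def naive_canon_stream(all_beats):
    return [nearest_beat(beat, all_beats) for beat in all_beats]
```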

There are glimmers of musicality in this version though. Every once in a while, the canon channel will remain on a single sequential set of beats for a little while. When this happens, it sounds much more musical. If we can make this happen more often, we may end up with a better sounding canon. The challenge then is to find a way to identify long consecutive strands of beats that fit well with the main stream. One approach is to break down the main stream into a set of musically coherent phrases and align each of those phrases with a similar sounding coherent phrase. This will help us avoid the many head-jarring transitions that occur in the previous rat's nest example. But how do we break a song down into coherent phrases? Luckily, it is already done for us. The Echo Nest analysis includes a breakdown of a song into sections – high-level musically coherent phrases in the song – exactly what we are looking for. We can use the sections to drive the mapping. Note that breaking a song down into sections is still an open research problem – there's no perfect algorithm for it yet, but The Echo Nest algorithm is state of the art and probably as good as it gets. Luckily, for this task, we don't need a perfect algorithm. In the above visualization you can see the sections. Here's a blown-up version – each of the horizontal colored rectangles represents one section:

[Figure: the section breakdown of "Someone Like You" – each horizontal colored rectangle represents one section]

You can see that this song has 11 sections. Our goal then is to get a sequence of beats for the canon stream that aligns well with the beats of each section. To make things a little easier to see, let's focus on a single section. The following visualization shows the similar-beat graph for a single section (section 3) of the song:

[Figure: the similar-beat graph for section 3 of "Someone Like You"]

You can see bundles of edges leaving section 3 bound for sections 5 and 6. We could use these bundles to find the most similar sections and simply overlap them. However, given that sections are rarely the same length, nor are they likely to be aligned to similar sounding musical events, it is unlikely that this would give a good listening experience. Still, we can use this bundling to our advantage. Remember, our goal is to find a good coherent sequence of beats for the canon stream. We can make a simplifying rule: we will select a single sequence of beats for the canon stream to align with each section. The challenge, then, is simply to pick the best sequence for each section, and the edge bundles help us do this. For each beat in the main stream section we calculate the distance to its most similar sounding beat. We aggregate these distances and find the most commonly occurring one. For example, there are 64 beats in section 3. The most commonly occurring jump distance to a sibling beat is 184 beats away; there are ten beats in the section with a similar beat at this distance. We then use this most common distance of 184 to generate the canon stream for the entire section: for each beat of this section in the main stream, we add a beat to the canon stream that is 184 beats away from the main stream beat. Thus for each main stream section we find the most similar matching stream of beats for the canon stream. This visualization shows the corresponding canon beat for each beat in the main stream:

[Figure: the corresponding canon beat for each beat in the main stream]

This has a number of good properties. First, the segments don't need to be perfectly aligned with each other. Note, in the above visualization, that the beats similar to section 3 span across sections 5 and 6. If there are two separate chorus segments that should overlap, it is no problem if they don't start at exactly the same point in the chorus; the inter-beat distance will sort it all out. Second, we get good behavior even for sections that have no strong complementary section. For instance, the second section is mostly self-similar, so this approach aligns the section with a copy of itself offset by a few beats, leading to a very nice call and answer.

[Figure: the second section aligned with an offset copy of itself]

That's the core mechanism of The Autocanonizer: for each section in the song, find the most commonly occurring distance to a sibling beat, and then generate the canon stream by assembling beats using that most commonly occurring distance. The algorithm is not perfect. It fails badly on some songs, but for many songs it generates a really good canon effect. The gallery has 20 or so of my favorites.
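Putting the pieces together, the per-section construction might look something like this. A sketch, assuming each beat dictionary carries its index in the song, and reusing the hypothetical nearest_beat helper from above:

```python
# Sketch: build the canon stream one section at a time, using the most
# commonly occurring jump distance from each beat to its most similar beat.
from collections import Counter

def canon_stream_for_section(section_beats, all_beats):
    # Measure the jump (in beats) from each beat to its most similar sibling.
    jumps = [nearest_beat(b, all_beats)["index"] - b["index"]
             for b in section_beats]

    # The most commonly occurring jump defines the section's canon alignment
    # (e.g. 184 beats away for section 3 above).
    best_jump, _count = Counter(jumps).most_common(1)[0]

    canon = []
    for beat in section_beats:
        # Clamp in case the offset runs off either end of the song.
        target = max(0, min(beat["index"] + best_jump, len(all_beats) - 1))
        canon.append(all_beats[target])
    return canon

def build_canon_stream(sections, all_beats):
    stream = []
    for section_beats in sections:
        stream.extend(canon_stream_for_section(section_beats, all_beats))
    return stream
```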

(2) Play the streams back simultaneously
When I first released my hack, to actually render the two streams as audio, I played each beat of the two streams simultaneously using the Web Audio API. This was the easiest thing to do, but for many songs it results in some very noticeable audio artifacts at the beat boundaries; any time there's an interruption in the audio stream there's likely to be a click or a pop. For this to be a viable hack that I want to show people, I really needed to get rid of those artifacts. To do this I take advantage of the fact that for the most part we are playing longer sequences of continuous beats. So instead of playing a single beat at a time, I queue up the remaining beats in the song as a single queued buffer. When it is time to play the next beat, I check to see if it is the same beat that would naturally play if I let the currently playing queue continue. If it is, I 'let it ride' so to speak, and the next beat plays seamlessly without any audio artifacts. I can do this for both the main stream and the canon stream, and it virtually eliminates all the pops and clicks. However, there's a complicating factor. A song can vary in tempo throughout, so the canon stream and the main stream can easily get out of sync. To remedy this, at every beat we calculate the accumulated timing error between the two streams. If this error exceeds a certain threshold (currently 50ms), the canon stream is resync'd starting from the current beat. Thus, we can keep both streams in sync with each other while minimizing the explicit re-queueing of beats that causes the audio artifacts. The result is an audio stream that is nearly click free.
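The real playback code is JavaScript on top of the Web Audio API, but the drift bookkeeping is easy to sketch in Python. The schedule_beat callback below is a stand-in for the actual audio scheduling:

```python
# Sketch of the canon-stream resync logic: let each stream's clock advance
# by its own beat durations, and snap the canon stream back to the main
# stream whenever the accumulated timing error exceeds a threshold.
RESYNC_THRESHOLD = 0.050  # seconds (the 50 ms threshold described above)

def play_streams(main_stream, canon_stream, schedule_beat):
    # schedule_beat(beat, when) stands in for real Web Audio scheduling.
    main_time = 0.0
    canon_time = 0.0
    for main_beat, canon_beat in zip(main_stream, canon_stream):
        schedule_beat(main_beat, main_time)

        # Tempo drift: the two streams accumulate different beat durations.
        if abs(main_time - canon_time) > RESYNC_THRESHOLD:
            canon_time = main_time  # resync from the current beat

        schedule_beat(canon_beat, canon_time)
        main_time += main_beat["duration"]
        canon_time += canon_beat["duration"]
```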

(3) Supply a visualization that gives the listener an idea of how the app works
I've found that with many of these remixing apps, giving the listener a visualization that helps them understand what is happening under the hood is a key part of the hack. The first visualization that accompanied my hack was rather lame:

[Figure: the original visualization of "Let It Be" (autocanonized) by The Beatles]

It showed the beats lined up in a row, colored by the timbral data. The two playback streams were represented by two 'tape heads' – the red head playing the main stream and the green head playing the canon stream. You could click on beats to play different parts of the song, but it didn't really give you an idea of what was going on under the hood. In the few days since the hackathon, I've spent a few hours upgrading the visualization to something better. I did four things: reveal more about the song structure, show the song sections, show the canon graph, and animate the transitions.

Reveal more about the song
The colored bars don't really tell you too much about the song. With a good song visualization you should be able to tell the difference between two songs that you know just by looking at the visualization. In addition to the timbral coloring, showing the loudness at each beat should reveal some of the song structure. Here's a plot that shows the beat-by-beat loudness for the song Stairway to Heaven.

[Figure: beat-by-beat loudness for "Stairway to Heaven" (log scale)]

You can see the steady build in volume over the course of the song.  But it is still less than an ideal plot. First of all, one would expect the build for a song like Stairway to Heaven to be more dramatic than this slight incline shows.  This is because the loudness scale is a logarithmic scale.  We can get back some of the dynamic range by converting to a linear scale like so:

[Figure: beat-by-beat loudness converted to a linear scale]

This is much better, but the noise still dominates the plot. We can smooth out the noise by taking a windowed average of the loudness for each beat. Unfortunately, that also softens the sharp edges, so that short events, like 'the drop', could get lost. We want to preserve the significant edges while still eliminating much of the noise. A good way to do this is to use a median filter instead of a mean filter (there's a code sketch of this after the examples below). When we apply such a filter we get a plot that looks like this:

[Figure: beat-by-beat loudness after applying a median filter]

The noise is gone, but we still have all the nice sharp edges. Now there's enough info to help us distinguish between two well-known songs. See if you can tell which of the following songs is 'A Day in the Life' by The Beatles and which one is 'Hey Jude' by The Beatles.

[Figure: median-filtered loudness plot of mystery song #1]

Which song is it? Hey Jude or A Day in the Life?

[Figure: median-filtered loudness plot of mystery song #2]

Which song is it? Hey Jude or A Day in the Life?
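As an aside, the whole smoothing pipeline is only a few lines of code. A minimal sketch, assuming a list of per-beat loudness values in dB and using the standard dB-to-amplitude conversion:

```python
# Sketch: convert per-beat loudness from dB to a linear scale, then smooth
# it with a median filter, which removes noise but preserves sharp edges.
def db_to_linear(db_values):
    # Standard dB-to-amplitude conversion: 0 dB -> 1.0, -20 dB -> 0.1, ...
    return [10 ** (db / 20.0) for db in db_values]

def median_filter(values, window=5):
    half = window // 2
    smoothed = []
    for i in range(len(values)):
        neighborhood = sorted(values[max(0, i - half):i + half + 1])
        smoothed.append(neighborhood[len(neighborhood) // 2])
    return smoothed

# Toy data: a noisy quiet passage followed by 'the drop'.
loudness_db = [-23.1, -22.8, -40.0, -22.5, -21.9, -9.0, -8.8, -9.2]
print(median_filter(db_to_linear(loudness_db)))
```

Note how the single -40 dB outlier vanishes while the jump at the drop stays sharp.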

Show the song sections
As part of the visualization upgrades I wanted to show the song sections to help show where the canon phrase boundaries are. To do this I created a simple set of colored blocks along the baseline. Each one aligns with a section. The colors are assigned randomly.

Show the canon graph and animate the transitions.
To help the listener understand how the canon is structured, I show the canon transitions as arcs along the bottom of the graph. When the song is playing, the green cursor representing the canon stream animates along the graph, giving the listener a dynamic view of what is going on. The animations were fun to do. They weren't built into Raphael; instead, I got to do them manually. I'm pretty pleased with how they came out.

[Figure: the upgraded visualization for "Stairway to Heaven", with section blocks and canon arcs]

All in all, I think the visualization is pretty neat now compared to where it was after the hack. It is colorful and eye-catching. It tells you quite a bit about the structure and makeup of a song (including detailed loudness, timbral, and section info). It shows how the song will be represented as a canon, and when the song is playing it animates to show you exactly which parts of the song are being played against each other. You can interact with the visualization by clicking on it to listen to any particular part of the canon.

[Figure: the final Autocanonizer visualization for "Stairway to Heaven"]

Wrapping up: this was a fun hack and the results are pretty unique. I don't know of any other auto-canonizing apps out there. It was pretty fun to demo the hack at the SXSW Music Hack Championships too. People really seemed to like it and even applauded spontaneously in the middle of my demo. The updates I've made since then – such as fixing the audio glitches and giving the visualization a facelift – make it ready for the world to come and visit. Now I just need to wait for them to come.



The Autocanonizer

I've spent the last 24 hours at the SXSW Music Hackathon championship. For my hack I built something called The Autocanonizer. It takes any song and tries to make a canon out of it. A canon is a song that can be played against a copy of itself. The Autocanonizer does this by looking at the detailed audio in the song (via The Echo Nest analysis) and looking for parts of the song that might overlap well. It builds a map of all these parts, and when it plays the song it plays the main audio while overlapping it with the second audio stream. It doesn't always work, but when it does, the results can be quite fun and sometimes quite pleasing.

To go along with the playback I created a visualization that shows the song and the two virtual tape heads that are playing the song. You can click on the visualization to hear particular bits.

[Screenshot: The Autocanonizer playing "Let It Be" by The Beatles]

 

There are still some audio artifacts on a few songs. I know how to fix it, but it requires some subtle math (subtraction) that I'm sure I'll mess up right before the hackathon demo if I attempt it now, so it will have to wait for another day. Also, there's a Firefox issue that I will fix in the coming days. Or you can go and fix all this yourself, because the code is on GitHub.



Beyond the Play Button – the future of listening

I've created a page with all the supporting info for my SXSW talk called Beyond The Play Button – the future of listening.

Beyond the Play Button – the future of listening

The page contains slides, links to all supporting data and links to all apps demoed.


The Echo Nest Is Joining Spotify: What It Means To Me, and To Developers

I love writing music apps and playing with music data. How much? Every weekend I wake up at 6am and spend a long morning writing a new music app or playing with music data. That’s after a work week at The Echo Nest, where I spend much of my time writing music apps and playing with music data. The holiday break is even more fun, because I get to spend a whole week working on a longer project like Six Degrees of Black Sabbath or the 3D Music Maze.

That's why I love working for The Echo Nest. We have built an amazing open API, which anyone can use, for free, to create music apps. With this API, you can create phenomenal music discovery apps like Boil the Frog that take you on a journey between any two artists, or Music Popcorn, which guides you through hundreds of music genres, from the arcane to the mainstream. It also powers apps that create new music, like Girl Talk in a Box, which lets you play with your favorite song in the browser, or The Swinger, which creates a Swing version of any song.


One of my roles at The Echo Nest is to be the developer evangelist for our API — to teach developers what our API can do, and get them excited about building things with it. This is the best job ever. I get to go to all parts of the world to attend events like Music Hack Day and show a new crop of developers what they can do with The Echo Nest API. In that regard, it’s also the easiest job ever, because when I show apps that developers have built with The Echo Nest API, like The Bonhamizer and The Infinite Jukebox, or show how to create a Pandora-like radio experience with just a few lines of Python, developers can’t help but get excited about our API.

[Photo: an Echo Nest API workshop]

Photo by Thomas Bonte

At The Echo Nest we’ve built what I think is the best, most complete music API — and now our API is about to get so much better.

Today, we've announced that The Echo Nest has been acquired by Spotify, the award-winning digital music service. As part of Spotify, The Echo Nest will use our deep understanding of music to give Spotify listeners the best possible personalized music listening experience.

Spotify has long been committed to fostering a music app developer ecosystem. They have a number of APIs for creating apps on the web, on mobile devices, and within the Spotify application. They’ve been a sponsor and active participant in Music Hack Days for years now. Developers love Spotify, because it makes it easy to add music to an app without any licensing fuss. It has an incredibly huge music catalog that is available in countries around the world.

Spotify and The Echo Nest APIs already work well together. The Echo Nest already knows Spotify’s music catalog. All of our artist, song, and playlisting APIs can return Spotify IDs, making it easy to build a smart app that plays music from Spotify. Developers have been building Spotify / Echo Nest apps for years. If you go to a Music Hack Day, one of the most common phrases you’ll hear is, “This Spotify app uses The Echo Nest API”.

I am incredibly excited about becoming part of Spotify, especially because of what it means for The Echo Nest API. First, to be clear, The Echo Nest API isn’t going to go away. We are committed to continuing to support our open API. Second, although we haven’t sorted through all the details, you can imagine that there’s a whole lot of data that Spotify has that we can potentially use to enhance our API.  Perhaps the first visible change you’ll see in The Echo Nest API as a result of our becoming part of Spotify is that we will be able to keep our view of the Spotify ID space in our Project Rosetta Stone ID mapping layer incredibly fresh. No more lag between when an item appears in Spotify and when its ID appears in The Echo Nest.

The Echo Nest and the Spotify APIs are complementary. Spotify’s API provides everything a developer needs to play music and facilitate interaction with the listener, while The Echo Nest provides deep data on the music itself — what it sounds like, what people are saying about it, similar music its fans should listen to, and too much more to mention here. These two APIs together provide everything you need to create just about any music app you can think of.

I am a longtime fan of Spotify. I’ve been following them now for over seven years. I first blogged about Spotify way back in January of 2007 while they were still in stealth mode. I blogged about the Spotify haircuts, and their serious demeanor:

[Photo: Those Crazy Spotify Guys in 2007]

I blogged about the Spotify application when it was released to private beta (“Woah – Spotify is pretty cool”), and continued to blog about them every time they added another cool feature.

Last month, on a very cold and snowy day in Boston, I sat across a conference table from a bunch of really smart folks with Swedish accents. As they described their vision of a music platform, it became clear to me that they share the same obsession that we do with trying to find the best way to bring fans and music together.

Together, we can create the best music listening experience in history.

I'm totally excited to be working for Spotify now. Still, as with any new beginning, there's an ending as well. We've worked hard, for many years, to build The Echo Nest – and as anyone who's spent time here knows, we have a very special culture of people totally obsessed with building the best music platform. Of course this will continue, but as we become part of Spotify things will necessarily change, and the time when The Echo Nest was a scrappy music startup, back when no one in their right mind wanted to be a music startup, will be just a sweet memory. To me, this is a bit like graduation day in high school – it is exciting to be moving on to bigger things, but also sad to say goodbye to that crazy time in life.

There is a big difference: When I graduated from high school and went off to college, I had to say goodbye to all my friends, but now as I go off to the college of Spotify, all my friends and classmates from high school are coming with me. How exciting is that!




Anti-preferences in regional listening

In previous posts, we've seen that different regions of the country can have different listening preferences. So far we've looked at the distinctive artists in any particular region. Perhaps equally interesting is to look at artists that get far fewer listens in a particular region than you would expect. These are the regional anti-preferences: the artists that are generally popular across the United States, but get much less love in a particular part of the country.

To find these artists, we merely look for artists that drop the furthest in rank on a region's most-played chart when compared to the whole U.S. For example, we can look at the top 50 artists in the United States and find those of the 50 that drop furthest in rank on the New Hampshire chart. Try it yourself. Here are the results:

Artists listened to more in the United States than in New Hampshire

#   Artist        Rank in United States   Rank in New Hampshire   Delta
1   R. Kelly      42                      720                     -678
2   2Pac          45                      243                     -198
3   Usher         46                      205                     -159
4   Coldplay      36                      155                     -119
5   Chris Brown   37                      120                     -83

R. Kelly is ranked the 42nd most popular artist in the U.S., but in New Hampshire he's the 720th most popular, a drop of 678 positions on the chart, making him the most ignored artist in New Hampshire.
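The computation behind the table is just a rank subtraction. A sketch, assuming two dicts mapping artist name to chart rank:

```python
# Sketch: find the most "ignored" artists in a state - the U.S. top-50
# artists that drop furthest in rank on the state's own chart.
def most_ignored(us_ranks, state_ranks, top_n=50):
    rows = []
    for artist, us_rank in us_ranks.items():
        if us_rank <= top_n and artist in state_ranks:
            delta = us_rank - state_ranks[artist]  # more negative = bigger drop
            rows.append((delta, artist, us_rank, state_ranks[artist]))
    return sorted(rows)  # biggest drops first

us = {"R. Kelly": 42, "2Pac": 45, "Usher": 46, "Coldplay": 36, "Chris Brown": 37}
nh = {"R. Kelly": 720, "2Pac": 243, "Usher": 205, "Coldplay": 155,
      "Chris Brown": 120}

for delta, artist, us_rank, nh_rank in most_ignored(us, nh):
    print(artist, us_rank, nh_rank, delta)
```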

We can do this for each of the states in the United States, and of course we can put them on a map. Here's a map that shows the most ignored of the U.S. top-50 artists in each state.

[Map: the most ignored U.S. top-50 artist in each state]

What can we do with this information? If we know where a music listener lives, but we know nothing else about them, we can potentially improve their listening experience by giving them music based upon their local charts instead of the global or national charts. We can also improve the listening experience even if we don't know where the listener is from. As we can see from the map, certain artists are polarizing: liked in some circles and disliked in others. If we eliminate the polarizing artists for a listener that we know nothing about, we can reduce the risk of musically offending that listener. Of course, once we know a little bit about the music taste of a listener we can greatly improve their recommendations beyond what we can do based solely on demographic info such as the listener's location.

Future work: There are a few more experiments that I'd like to try with regard to exploring regional preferences. In particular, I think it'd be fun to generate an artist similarity metric based solely on regional listening behaviors. In this world, Juicy J, the southern rapper, and Hillsong United, the worship band, would be very similar, since they both get lots of listens from people in Memphis. A few readers have suggested alternate scoring algorithms to try, and of course it would be interesting to repeat these experiments for other parts of the world. So much music data, so little time! However, this may be the last map I make for a while, since the Internet must be getting sick of 'artists on a map' by now.

Credit and thanks to Randal Cooper (@fancycwabs) for creating the first set of anti-preference maps. Check out his blog and the Business Insider article about his work.

The data for the map is drawn from an aggregation of data across a wide range of music services powered by The Echo Nest and is based on the listening behavior of a quarter million online music listeners.

