The new Spotify Web API allows the developer to create and add tracks to a playlist on behalf of a listener. This is a pretty powerful feature, opening the door for a whole range of apps. For instance, this weekend, I added the ability to save a Roadtrip Mixtape playlist, so you can now actually take your mixtapes on the road. The code is on github if you are interested in seeing how it is done.
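If you're curious what the playlist-saving bit looks like in code, here's a minimal sketch using Spotipy, my Python wrapper for the Web API (more on that below). The username and track URIs are placeholders – swap in your own:

```python
# A minimal sketch of saving tracks as a playlist on behalf of a user.
# Assumes SPOTIPY_CLIENT_ID, SPOTIPY_CLIENT_SECRET and SPOTIPY_REDIRECT_URI
# are set in the environment.
import spotipy
import spotipy.util as util

username = 'your_spotify_username'        # placeholder
track_uris = ['spotify:track:...']        # the tracks your app gathered

# walks the user through the OAuth dance and returns an access token
token = util.prompt_for_user_token(username, scope='playlist-modify-public')

sp = spotipy.Spotify(auth=token)
playlist = sp.user_playlist_create(username, 'My Roadtrip Mixtape')
sp.user_playlist_add_tracks(username, playlist['id'], track_uris)
```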
I am wearing my International Executive Music Hacker hat today. I’m writing this blog post at 5AM somewhere over the Atlantic Ocean, on my way to the Barcelona Music Hack Day, where I’ll be representing both The Echo Nest and Spotify. I’m pretty excited about the hack event – first, because it’s in freaking Barcelona, and second, because I get to talk about what’s been going on with the Spotify and Echo Nest APIs.
It has been just about 100 days since The Echo Nest and Spotify joined forces. In that time we've been working hard to build the best music platform for listeners and for developers. This week we are releasing some of the very first fruits of our labors.
First up, we are releasing a new Spotify Web API.
This is a complete revamp of the Spotify Metadata API (the old version has now been deprecated). The Spotify Web API gives you access to all sorts of information about the Spotify catalog, including details about artists, albums and tracks. Want to know the top tracks for an artist? There's an API for that. Looking for high-quality album art, artist images and 30-second audio previews? There are APIs for that too. Best of all, the new API includes perhaps the most requested Spotify API feature of all time: with the Spotify Web API you can now create and modify playlists on behalf of authenticated users. Yes – you can now create a Spotify web app that creates playlists. (I personally requested this feature way back in 2008; here's my begging plea for the feature in 2009.)
I've been using the beta version of this new API for a couple of months now and I must say I am quite impressed. The API is fast, super easy to use, and provides all sorts of great data for building apps. Over the past few weeks I've had fun converting a number of my favorite apps to use the Spotify API. First there's the Road Trip Mix Tape, which lets you create a Spotify playlist of music by artists from the very towns you are driving through. Then there's Music Popcorn, a visual interface for exploring genres. For the less visual, there's the Genre Browser, which gives you lots of details about the different music genres, including playlists that offer a gentle introduction to any of the thousands of Echo Nest genres. Next there's Boil the Frog, an app that creates seamless playlists between any two artists. Finally there's the 3D Music Maze, an app that lets you explore music by wandering through a 3-dimensional music world.
Next up, a freshly minted Echo Nest + Spotify Sandbox — a new Spotify ID space.
These apps are possible because of the second thing we are releasing this week – a spiffy, shiny new Spotify Rosetta Stone catalog that ensures that the Echo Nest API has the freshest, most up-to-date view of the Spotify universe of music. For those who might be new to The Echo Nest, Project Rosetta Stone is something we've been working on here at the Nest for many years. Its goal is to solve one of the most common problems that nearly every music app developer faces: every music service has its own set of IDs. A music subscription service like Spotify has its own artist, album and track IDs; a lyric service has its own (and very different) IDs for those same artists, albums and tracks; and a concert ticketing API has yet a third set of IDs. This is quite problematic for app developers who want to build an app that combines information from multiple services. Without a common ID system, the app developer has to resort to metadata searching and matching, which is slow and quite error prone – and that results in a poor app.
Project Rosetta Stone solves this problem by providing ID mappings between as many music services as we can. With this mapping you can easily translate IDs from one ID space to another. With Rosetta Stone, if you have the Spotify track ID you can get Lyricfind and/or Musixmatch IDs, making it easy to use those respective APIs to retrieve lyrics for that song. You can easily map the Spotify artist ID to a Songkick or Eventful ID to get ticket and touring information from those APIs. And of course you can use the Spotify track ID to get detailed Echo Nest information about the song, such as its tempo, energy and danceability, along with detailed Echo Nest artist data such as the latest artist news, blog posts and similar artists.
We have had Spotify IDs in Rosetta Stone for many years, but this particular mapping has been problematic for us in the past. Spotify has a huge catalog, and keeping the mapping between Spotify and The Echo Nest fresh and up to date has always been a big challenge. There's a huge back catalog with millions of tracks to deal with, plus thousands of new tracks being added every week. The result was always a bit of a lag before updates to the Spotify catalog were reflected in the Rosetta Stone mapping. This meant that if you built a Rosetta Stone-based app you could find that The Echo Nest wouldn't always know about a Spotify track, especially if the track was very new. The result would be a less-than-perfect app.
This week we are releasing a new Spotify ID space. Our engineers have been working hard over the last 100 days to set up all sorts of infrastructure and plumbing to ensure that we have the most up-to-date view of the Spotify catalog. No more lag between when a new track appears in Spotify and when you can get Echo Nest data for it. Plus, all of our APIs that take IDs as input will now take Spotify IDs as well. If you have a Spotify artist ID you can use it with any Echo Nest artist API method. Likewise, if you have a Spotify track ID you can use it with any Echo Nest song or track API method that takes a track ID as input. This makes it really easy for developers to use The Echo Nest and Spotify APIs together. For example, here's an API call that returns detailed audio properties for a Spotify track given its ID.
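It looks something like this – here's a sketch in Python with the requests library (the API key and track ID are placeholders; the audio_summary bucket is where tempo, energy, danceability and friends live):

```python
# A sketch of the call: Echo Nest track data keyed directly by Spotify ID.
import requests

response = requests.get(
    'http://developer.echonest.com/api/v4/track/profile',
    params={
        'api_key': 'YOUR_ECHO_NEST_API_KEY',      # placeholder
        'id': 'spotify:track:YOUR_TRACK_ID',      # a Spotify track ID
        'bucket': 'audio_summary',
    })

summary = response.json()['response']['track']['audio_summary']
print(summary['tempo'], summary['energy'], summary['danceability'])
```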
I've been having much fun using The Echo Nest API with the brand new Spotify API, and I've already written some code that you can use. First, I wrote a Python library for Spotify called Spotipy. It makes it easy to write Python programs that use the new Spotify Web API, and it works well with my Echo Nest Python library called Pyen. Here's an example of using the two libraries together:
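This sketch assumes an ECHO_NEST_API_KEY environment variable is set, and the seed artist and numbers are just for illustration. Pyen finds artists similar to a seed artist, and Spotipy finds a Spotify track for each:

```python
# Pyen finds similar artists; Spotipy finds a Spotify track for each.
import pyen
import spotipy

en = pyen.Pyen()           # reads ECHO_NEST_API_KEY from the environment
sp = spotipy.Spotify()     # search doesn't need user auth

response = en.get('artist/similar', name='Weezer', results=10)
for artist in response['artists']:
    results = sp.search(q='artist:' + artist['name'], type='track', limit=1)
    tracks = results['tracks']['items']
    if tracks:
        print(artist['name'], '-', tracks[0]['name'])
```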
So yes, I'm pretty jazzed about this trip to Barcelona. I get to create a music hack, I get to spend a few days with some of the best music hackers in the world (the Barcelona Music Hack Day, as part of the Sonar Festival, tends to attract the top music hackers), and I get to spend a few days on the Mediterranean in one of the most beautiful cities in the world. Best of all, I get to talk about the new Spotify and Echo Nest developer platform and help music hackers build cool stuff on top of the newly combined platform.
I’ve put together a page that talks in detail about the new Spotify / Echo Nest platform. It has links to all of the API docs, libraries, examples, github repos, demos and details on how you can use The Echo Nest / Spotify Platform. Check it out here:
Keep an eye on this space – I'll be updating it as we continue to integrate our developer APIs. There's lots more coming, so stay tuned!
Rumor has it from some of the Echo Nest gang that went to Stockholm last week for new employee orientation that there is some sort of mandatory karaoke requirement. Now for some, I'm sure this is great fun, but for others, like myself, not so much. I thought it would be best to prepare for my own mandatory karaoke by finding some very short songs, in order to minimize my time on stage. To do this I went through a database of the top Billboard songs of the last 60 years to find the shortest ones. Here are some of the shortest popular songs of the last 60 years:
| Length (sec) | Artist | Title | Chart date |
|---|---|---|---|
| 76 | Anna Kendrick | Cups | 2013-01-14 |
| 78 | Zac Efron | What I've Been Looking For (Reprise) | 2006-02-13 |
| 83 | Buchanan & Goodman | Santa And The Satellite (Part I) | 1957-12-25 |
| 92 | Audrey | Dear Elvis (Page 1) | 1956-09-24 |
| 96 | Fats Domino | Whole Lotta Loving | 1958-11-19 |
| 98 | Glee Cast | Isn't She Lovely | 2011-05-30 |
| 99 | Maurice Williams & The Zodiacs | Stay | 1960-10-05 |
| 101 | Swinging Blue Jeans, The | Hippy Hippy Shake | 1964-03-09 |
| 103 | Peter, Paul & Mary | Settle Down (Goin' Down That Highway) | 1963-01-21 |
| 105 | Four Tops | Ain't That Love | 1965-08-02 |
| 105 | Fats Domino | Shu Rah | 1961-03-22 |
| 105 | Chuck Berry | Let It Rock | 1960-02-03 |
| 107 | Lucas Grabeel & Ashley Tisdale | Bop To The Top | 2006-02-13 |
| 107 | Beach Boys, The | Little Deuce Coupe | 1963-08-19 |
| 107 | Clyde McPhatter | Lover Please | 1962-03-05 |
| 108 | Ventures, The | Hawaii Five-O | 1969-03-10 |
| 110 | Glee Cast | Sing! | 2010-11-01 |
| 110 | Glee Cast | It's My Life / Confessions Part II | 2009-10-26 |
| 110 | Ricky Nelson | If You Can't Rock Me | 1963-04-22 |
So it looks like my minimum possible karaoke pain will be 76 seconds if I go with Anna Kendrick's 'Cups'. Certainly better than Guns N' Roses' 'November Rain' at 8:57 or Don McLean's 'American Pie' at 6:49. But better yet, I can go with 'Hawaii Five-O'. That song is not only short, it has no vocals. With that song I'm sure to be pitch perfect!
On the very same day that Spotify announced its acquisition of The Echo Nest, they released a brand new Spotify iOS SDK. Trying this new SDK out has been high on my priority list, and finally, after a few crazy weeks, I've had a bit of time to take it for a test drive. I walked through the beginner's tutorial and was up and running with an iOS app in the simulator in about 30 minutes. Easy peazy! The bit that took the longest was setting up the token exchange service – a service that you need to run on your own server as part of the authentication process. The tutorial provides a sample of such a service written in Ruby; however, I'm not a Ruby programmer, so I had to go through all the gyrations of installing Ruby, figuring out how to install gems and getting the required gems installed. Once I had everything installed it worked fine and I was able to get the tutorial running. However, I figure that I'll be working with the iOS SDK a great deal in the future, and I'd rather not have to deal with a Ruby server every time I create a new app, so for my Sunday morning programming project I've re-written the Ruby swap service in Python. The service is on github here: spotify_token_swap
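Here's the gist of the swap service, sketched with Flask (the actual code in the repo does a bit more; the client ID, secret and callback URI below are placeholders for your own app's values):

```python
# A stripped-down token swap service in Flask. The iOS app POSTs the
# authorization code it got from Spotify; we exchange it for tokens.
import requests
from flask import Flask, request, jsonify

app = Flask(__name__)

CLIENT_ID = 'your-client-id'          # placeholders - use your app's values
CLIENT_SECRET = 'your-client-secret'
CALLBACK_URI = 'your-app://callback'

@app.route('/swap', methods=['POST'])
def swap():
    r = requests.post('https://accounts.spotify.com/api/token', data={
        'grant_type': 'authorization_code',
        'code': request.form['code'],
        'redirect_uri': CALLBACK_URI,
        'client_id': CLIENT_ID,
        'client_secret': CLIENT_SECRET,
    })
    return jsonify(r.json()), r.status_code

if __name__ == '__main__':
    app.run(port=1234)
```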
If you are going to be using the new Spotify iOS SDK to create apps and you'd rather deal with Python than Ruby, you might find it useful.
I work for Spotify now – so for my Sunday morning programming project I thought I’d write a simple Spotify App that uses The Echo Nest API to create playlists based upon a seed song. I’ve done this before, but the last time was a few years ago and the Spotify Apps API has changed quite a bit since then, so I thought I’d use this as an opportunity to freshen my understanding of the Spotify API as well as to demonstrate how to write a Spotify App that uses The Echo Nest API.
It did take me a bit of time to get the hang of the newer Spotify Apps API. It has changed quite a bit since I last used it and many of the examples that I relied on in the past, like Peter Watt’s excellent Kitchen Sink app, use an older version of the API. The new version has some significant changes including a nifty new module system along with a new approach to managing long-running queries that relies on promises. Once I got the hang of it, I decided that I like the new version – it makes for cleaner code and a much more efficient app since much less data needs to be moved around.
The app is on github – to use it you need to sign up for a developer account on Spotify and follow the rest of the Getting Started instructions (this means if you are not a developer, you’ll probably not be able to use the app).
The Spotify Apps API makes it super easy to create apps that run inside Spotify. It's a very familiar programming environment for anyone who has done web programming. You can use all of your favorite libraries, from jQuery to Lo-Dash, to create really compelling apps that sit on top of the millions and millions of tracks in the Spotify catalog. However, unlike a web app, where anyone can publish their app on the web, to publish a Spotify App you have to submit your app to the Spotify App approval process, and only apps that Spotify approves are published. Spotify sets a high bar for what ultimately gets approved – which keeps the quality of the apps high, but also means that hacks and experiments built on the Spotify Apps platform will likely never be approved for release to the general public. It's a difficult balancing act for Spotify. They've built the ultimate music hacking platform with this API, but if they open the doors to every music hack, they will ultimately dilute the listening experience. Like so many other app stores that are flooded with garbage apps, if they published every app and hack, Spotify listeners would be inundated with the musical equivalent of flashlight and fart apps. With the approval process, Spotify essentially says 'the listener comes first', which is the right choice. Still, as a music hacker I do wish it were easier to publish rich music apps built on the Spotify platform. Luckily, Spotify is committed to building an active and vibrant developer ecosystem, so I don't expect we'll be standing still.
Update 3/24/14: – I’ve added the ability to save these playlists back to Spotify, so you can take the Echo Nest radio playlists with you.
Second update 3/24/14 – note that Spotify's recent announcement that they are closing app submissions means you won't be able to submit apps for publishing anymore, but you should still be able to create your own.
Last week at the SXSW Music Hack Championship hackathon I built The Autocanonizer, an app that tries to turn any song into a canon by playing it against a copy of itself. In this post, I explain how it works.
At the core of The Autocanonizer are three functions: (1) build simultaneous audio streams for the two voices of the canon, (2) play them back simultaneously, and (3) supply a visualization that gives the listener an idea of what is happening under the hood. Let's look at each of these three functions:
(1A) Build simultaneous audio streams – finding similar sounding beats
The goal of The Autocanonizer is to fold a song in on itself in such a way that the result still sounds musical. To do this, we use The Echo Nest analyzer and the jremix library to do much of the heavy lifting. First we use the analyzer to break the song down into beats. Each beat is associated with a timestamp, a duration, a confidence, and a set of overlapping audio segments. An audio segment contains a detailed description of a single audio event in the song: harmonic data (i.e. the pitch content), timbral data (the texture of the sound) and a loudness profile. Using this info we can create a Beat Distance Function (BDF) that returns a value representing the relative distance between any two beats in the audio space. Beats that are close together in this space sound very similar; beats that are far apart sound very different. The BDF works by calculating the average distance between overlapping segments of the two beats, where the distance between any two segments is a weighted combination of the Euclidean distances between their pitch, timbre, loudness, duration and confidence vectors. The weights control which aspect of the sound takes more precedence in determining beat distance. For instance, we can give more weight to the harmonic content of a beat, or to its timbral quality. There's no hard science for selecting the weights; I just picked some to start with and tweaked them a few times based on how well they worked. I started with the same weights that I used when creating the Infinite Jukebox (which also relies on beat similarity), but ultimately gave more weight to the harmonic component, since good harmony is so important to The Autocanonizer.
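In code, the BDF looks roughly like this sketch (the field names come from the Echo Nest analysis; the weights are illustrative rather than the exact values the app uses, and I'm assuming each beat carries the list of segments that overlap it):

```python
# A sketch of the Beat Distance Function. The weights are illustrative.
import numpy as np

WEIGHTS = {'pitches': 10.0,       # harmonic content gets extra weight
           'timbre': 1.0,
           'loudness_max': 1.0,
           'duration': 100.0,
           'confidence': 1.0}

def segment_distance(s1, s2):
    """Weighted combination of Euclidean distances between segment fields."""
    return sum(w * np.linalg.norm(np.atleast_1d(s1[f]) - np.atleast_1d(s2[f]))
               for f, w in WEIGHTS.items())

def beat_distance(beat1, beat2):
    """Average distance between the overlapping segments of two beats."""
    pairs = list(zip(beat1['segments'], beat2['segments']))
    if not pairs:
        return float('inf')
    return sum(segment_distance(s1, s2) for s1, s2 in pairs) / len(pairs)
```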
(1B) Build simultaneous audio streams – building the canon
The next challenge, and perhaps the biggest challenge of the whole app, is to build the canon – that is, given the Beat Distance Function, create two audio streams, one beat at a time, that sound good when played simultaneously. The first stream is easy: we just play the beats in normal beat order. It's the second stream, the canon stream, that we have to worry about. The challenge is to put the beats in the canon stream in an order such that (1) the beats are in a different order than in the main stream, and (2) they sound good when played along with the main stream.
The first thing we can try is to make each beat in the canon stream be the most similar sounding beat to the corresponding beat in the main stream. If we do that we end up with something that looks like this:
It's a rat's nest of connections; very little structure is evident. You can listen to what it sounds like by clicking here: Experimental Rat's Nest version of Someone Like You (autocanonized). It's worth a listen to get a sense of where we start from. So why does this bounce all over the place? There are lots of reasons. First, there's lots of repetition in music – if I'm in the first chorus, the most similar beat may be in the second or third chorus; both may sound very similar, and it is practically a roll of the dice which one will win, leading to much bouncing between the two choruses. Second, since we have to find a similar beat for every beat, even beats that have no near neighbors are forced into the graph, which turns it into spaghetti. Finally, the underlying beat distance function relies on weights that are hard to generalize for all songs, leading to more noise. The bottom line is that this simple approach leads to a chaotic and mostly non-musical canon with head-jarring transitions on the canon channel. We need to do better.
There are glimmers of musicality in this version, though. Every once in a while, the canon channel will remain on a single sequential run of beats for a little while, and when this happens, it sounds much more musical. If we can make this happen more often, we may end up with a better sounding canon. The challenge, then, is to find a way to identify long consecutive strands of beats that fit well with the main stream. One approach is to break the main stream down into a set of musically coherent phrases and align each of those phrases with a similar sounding coherent phrase. This will help us avoid the many head-jarring transitions that occur in the previous rat's nest example. But how do we break a song down into coherent phrases? Luckily, it is already done for us. The Echo Nest analysis includes a breakdown of a song into sections – high-level musically coherent phrases in the song – exactly what we are looking for. We can use the sections to drive the mapping. Note that breaking a song down into sections is still an open research problem; there's no perfect algorithm for it yet, but The Echo Nest algorithm is state of the art and probably as good as it gets. Luckily, for this task, we don't need a perfect algorithm. In the above visualization you can see the sections. Here's a blown-up version – each of the horizontal colored rectangles represents one section:
You can see that this song has 11 sections. Our goal, then, is to get a sequence of beats for the canon stream that aligns well with the beats of each section. To make things a little easier to see, let's focus in on a single section. The following visualization shows the similar-beat graph for a single section (section 3) of the song:
You can see bundles of edges leaving section 3 bound for sections 5 and 6. We could use these bundles to find the most similar sections and simply overlap them. However, given that sections are rarely the same length, nor are they likely to be aligned to similar sounding musical events, it is unlikely that this would give a good listening experience. Still, we can use this bundling to our advantage. Remember, our goal is to find a good coherent sequence of beats for the canon stream. We can make a simplifying rule: we will select a single sequence of beats for the canon stream to align with each section. The challenge, then, is simply to pick the best sequence for each section, and we can use these edge bundles to do it. For each beat in the main stream section we calculate the jump distance to its most similar sounding beat. We aggregate these distances and find the most commonly occurring one. For example, there are 64 beats in section 3, and the most commonly occurring jump distance to a sibling beat is 184 beats; there are ten beats in the section with a similar beat at that distance. We then use this most common distance of 184 to generate the canon stream for the entire section: for each beat of the section in the main stream, we add a beat in the canon stream that is 184 beats away from it. Thus for each main stream section we find the most similar matching stream of beats for the canon stream. This visualization shows the corresponding canon beat for each beat in the main stream.
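Here's the heart of that alignment step, sketched in Python for readability (the app itself is JavaScript). `nearest` is assumed to map each beat index to the index of its most similar sounding beat, as computed with the Beat Distance Function:

```python
# For one section: find the most common jump distance to a similar beat,
# then build the canon stream for the whole section at that fixed offset.
from collections import Counter

def canon_stream_for_section(section_beat_indexes, nearest, all_beats):
    # histogram of jump distances from each beat to its most similar beat
    jumps = Counter(nearest[i] - i for i in section_beat_indexes)
    best_offset = jumps.most_common(1)[0][0]   # e.g. 184 for section 3

    # the canon stream mirrors the section, shifted by the winning offset
    return [all_beats[(i + best_offset) % len(all_beats)]
            for i in section_beat_indexes]
```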
This has a number of good properties. First, the sections don't need to be perfectly aligned with each other. Note, in the above visualization, that the beats similar to section 3 span across sections 5 and 6. If there are two separate chorus sections that should overlap, it is no problem if they don't start at exactly the same point in the chorus; the inter-beat distance sorts it all out. Second, we get good behavior even for sections that have no strong complementary section. For instance, the second section is mostly self-similar, so this approach aligns the section with a copy of itself offset by a few beats, leading to a very nice call and answer.
That's the core mechanism of The Autocanonizer: for each section in the song, find the most commonly occurring distance to a sibling beat, and then generate the canon stream by assembling beats at that distance. The algorithm is not perfect – it fails badly on some songs – but for many songs it generates a really good canon effect. The gallery has 20 or so of my favorites.
(2) Play the streams back simultaneously
When I first released my hack, to actually render the two streams as audio I played each beat of the two streams simultaneously using the web audio API. This was the easiest thing to do, but for many songs it results in very noticeable audio artifacts at the beat boundaries: any time there's an interruption in the audio stream, there's likely to be a click or a pop. For this to be a viable hack that I want to show people, I really needed to get rid of those artifacts. To do this I take advantage of the fact that, for the most part, we are playing longer sequences of continuous beats. So instead of playing a single beat at a time, I queue up the remaining beats in the song as a single queued buffer. When it is time to play the next beat, I check to see whether it is the same beat that would naturally play if I let the currently playing queue continue. If it is, I 'let it ride', so to speak, and the next beat plays seamlessly without any audio artifacts. I can do this for both the main stream and the canon stream, which virtually eliminates all the pops and clicks. However, there's a complicating factor: a song can vary in tempo throughout, so the canon stream and the main stream can easily drift out of sync. To remedy this, at every beat we calculate the accumulated timing error between the two streams. If this error exceeds a certain threshold (currently 50ms), the canon stream is resync'd starting from the current beat. Thus we can keep both streams in sync with each other while minimizing the explicit queueing of beats that causes the audio artifacts. The result is an audio stream that is nearly click-free.
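The bookkeeping boils down to something like this (a sketch in Python for clarity – the real thing is JavaScript on top of the web audio API, and the actual buffer queueing is elided):

```python
# The drift bookkeeping: let queued audio ride until the accumulated
# timing error between the streams exceeds 50ms, then resync the canon.
DRIFT_THRESHOLD = 0.050  # seconds

class StreamSync:
    def __init__(self):
        self.main_time = 0.0    # accumulated main-stream playback time
        self.canon_time = 0.0   # accumulated canon-stream playback time

    def on_beat(self, main_beat_duration, canon_beat_duration):
        """Called at every beat boundary; returns True if a resync is due."""
        self.main_time += main_beat_duration
        self.canon_time += canon_beat_duration
        if abs(self.main_time - self.canon_time) > DRIFT_THRESHOLD:
            # re-queue the canon stream from the current beat,
            # aligned to the main stream's clock
            self.canon_time = self.main_time
            return True
        return False   # 'let it ride'
```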
(3) Supply a visualization that gives the listener an idea of how the app works
I’ve found with many of these remixing apps, giving the listener a visualization that helps them understand what is happening under the hood is a key part of the hack. The first visualization that accompanied my hack was rather lame:
It showed the beats lined up in a row, colored by the timbral data. The two playback streams were represented by two 'tape heads' – the red head playing the main stream and the green head playing the canon stream. You could click on beats to play different parts of the song, but it didn't really give you an idea of what was going on under the hood. In the few days since the hackathon, I've spent a few hours upgrading the visualization to something better. I did four things: reveal more about the song structure, show the song sections, show the canon graph, and animate the transitions.
Reveal more about the song
The colored bars don't really tell you much about the song. With a good song visualization you should be able to tell the difference between two songs you know just by looking at it. In addition to the timbral coloring, showing the loudness at each beat should reveal some of the song structure. Here's a plot that shows the beat-by-beat loudness for the song Stairway to Heaven:
You can see the steady build in volume over the course of the song. But it is still less than ideal. First of all, one would expect the build in a song like Stairway to Heaven to be more dramatic than this slight incline shows. This is because the loudness data is on a logarithmic (decibel) scale. We can get back some of the dynamic range by converting to a linear scale, like so:
This is much better, but the noise still dominates the plot. We can smooth out the noise by taking a windowed average of the loudness at each beat. Unfortunately, that also softens the sharp edges, so that short events like 'the drop' could get lost. We want to preserve the significant edges while still eliminating much of the noise. A good way to do this is to use a median filter instead of a mean filter. When we apply such a filter we get a plot that looks like this:
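The whole processing chain is short. Here it is sketched in Python (assuming each beat carries a loudness value in dB pulled from the analysis; the filter width is illustrative):

```python
# dB -> linear, then a median filter: kills the noise, keeps sharp edges.
import numpy as np
from scipy.signal import medfilt

def loudness_profile(beats, kernel_size=11):
    db = np.array([b['loudness'] for b in beats])  # analyzer loudness in dB
    linear = 10.0 ** (db / 20.0)                   # back to a linear scale
    return medfilt(linear, kernel_size)            # median, not mean
```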
The noise is gone, but we still have all the nice sharp edges. Now there’s enough info to help us distinguish between two well known songs. See if you can tell which of the following songs is ‘A day in the life’ by The Beatles and which one is ‘Hey Jude’ by The Beatles.
Show the song sections
As part of the visualization upgrades I wanted to show the song sections, to help show where the canon phrase boundaries are. To do this I created a simple set of colored blocks along the baseline. Each one aligns with a section; the colors are assigned randomly.
Show the canon graph and animate the transitions.
To help the listener understand how the canon is structured, I show the canon transitions as arcs along the bottom of the graph. When the song is playing, the green cursor, representing the canon stream, animates along the graph, giving the listener a dynamic view of what is going on. The animations were fun to do. They weren't built into Raphael, so I got to do them manually. I'm pretty pleased with how they came out.
All in all, I think the visualization is pretty neat now compared to where it was after the hack. It is colorful and eye-catching. It tells you quite a bit about the structure and makeup of a song (including detailed loudness, timbral and section info). It shows how the song will be represented as a canon, and when the song is playing it animates to show you exactly which parts of the song are being played against each other. You can interact with the visualization by clicking on it to listen to any particular part of the canon.
Wrapping up – this was a fun hack and the results are pretty unique. I don't know of any other auto-canonizing apps out there. It was pretty fun to demo the hack at the SXSW Music Hack Championships too – people really seemed to like it and even applauded spontaneously in the middle of my demo. The updates I've made since then, such as fixing the audio glitches and giving the visualization a face lift, make it ready for the world to come and visit. Now I just need to wait for the world to come.
I've spent the last 24 hours at the SXSW Music Hackathon Championship. For my hack I built something called The Autocanonizer. It takes any song and tries to make a canon out of it – a canon being a song that can be played against a copy of itself. The Autocanonizer does this by looking at the detailed audio of the song (via The Echo Nest analysis) and finding parts of the song that might overlap well. It builds a map of all these parts, and when it plays the song it plays the main audio while overlapping it with the second audio stream. It doesn't always work, but when it does, the results can be quite fun and sometimes quite pleasing.
To go along with the playback I created a visualization that shows the song and the two virtual tape heads that are playing the song. You can click on the visualization to hear particular bits.
There are still some audio artifacts on a few songs. I know how to fix them, but the fix requires some subtle math (subtraction) that I'm sure I'll mess up right before the hackathon demo if I attempt it now, so it will have to wait for another day. There's also a Firefox issue that I will fix in the coming days. Or you can go fix all of this yourself, because the code is on github.