Archive for category The Echo Nest
My latest music hack is Bangarang Boomerang. It is a web app (it runs in Chrome or the latest Safari) that lets you ‘drive’ the Skrillex song. You can freeze-frame the song on a beat, step backwards through the song beat by beat, advance through the song at double or triple time, and set bookmarks that let you easily jump to different sections of the song. It is a rather fun app that lets you feel like a musician, even if you have very little musical talent.
Watch the quick YouTube demo, and then try it yourself: Bangarang Boomerang
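The core of this kind of beat-level driving is a cursor over the song's beat timestamps. Here's a minimal sketch of the idea (my own illustration, not the app's actual code), where `beats` stands in for the beat start times you'd get from an Echo Nest analysis:

```javascript
// A beat cursor: step through a song's beats at any stride.
// +1 = normal, -1 = backwards, +2/+3 = double/triple time, 0 = freeze.
function BeatCursor(beats) {
  this.beats = beats;   // beat start times in seconds
  this.index = 0;
}

BeatCursor.prototype.step = function (stride) {
  var next = this.index + stride;
  // clamp to the song's beat range
  this.index = Math.max(0, Math.min(this.beats.length - 1, next));
  return this.beats[this.index]; // seek the player to this time
};
```

On each tick the app would call `step()` with the current stride and seek playback to the returned time, which is what makes freeze-framing and backwards playback feel instantaneous.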
A bit more coding this weekend on ‘Hear Here’, my iPhone app that plays music by nearby artists. It is now feature complete. The list of features is rather small – it really is a ‘do one thing well’ kind of app. It plays music by the nearest artists that match your filter. Currently you can filter by the popularity of the artist: if you are adventurous, you can listen to music by all nearby artists, and if you are not so brave you can stick to music by mainstream or popular artists. The app shows you how far away the ‘now playing’ artist is and how many artists are within a 25 mile radius. All music is streamed from Rdio, and of course you’ll need an Rdio subscription to hear full streams. I made my own icon – it is pretty ugly – so if you have design skills and want to contribute a logo I’d be very pleased to use it. Here’s a video of the app in action for a user who happens to be in Cupertino:
Next steps for the app are lots of testing, especially with poor network connectivity. After that, I’ll make sure I’m following all the rules for Rdio and Apple – and once I’m conforming to all the TOS’s and UI guidelines I’ll submit it to the App Store (as a free app).
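The nearest-artist selection the app performs is easy to sketch: compute the distance from the listener to each geolocated artist, keep those inside the radius that pass the popularity filter, and sort nearest-first. This is an illustration (the artist records with lat/lon/popularity fields are assumptions, not the app's actual data model):

```javascript
// Great-circle distance in miles between two lat/lon points.
function haversineMiles(lat1, lon1, lat2, lon2) {
  var toRad = function (d) { return d * Math.PI / 180; };
  var R = 3959; // Earth radius in miles
  var dLat = toRad(lat2 - lat1), dLon = toRad(lon2 - lon1);
  var a = Math.sin(dLat / 2) * Math.sin(dLat / 2) +
          Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) *
          Math.sin(dLon / 2) * Math.sin(dLon / 2);
  return 2 * R * Math.asin(Math.sqrt(a));
}

// Artists within radiusMiles, at or above minPopularity, nearest first.
function nearbyArtists(artists, lat, lon, radiusMiles, minPopularity) {
  return artists
    .map(function (a) {
      return { artist: a, miles: haversineMiles(lat, lon, a.lat, a.lon) };
    })
    .filter(function (e) {
      return e.miles <= radiusMiles && e.artist.popularity >= minPopularity;
    })
    .sort(function (x, y) { return x.miles - y.miles; });
}
```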
You can usually learn something about a person by looking at what music they listen to. Someone who listens to the Sex Pistols and the Ramones is likely to be from a very different demographic than someone whose favorite artist is Julie Andrews. Of course, there are always exceptions to the rule – there are probably a few playlists out there in the world that contain both “Anarchy in the UK” and “My Favorite Things” – but I’m quite sure you won’t be finding a mosh pit at a Julie Andrews concert any time soon.
As we collect more data about what people listen to we begin to learn more about the demographics of listening. Who really listens to Country music? Are they really mostly right-leaning southerners? Are all Hanson fans now 30 years old? To learn how we can answer some of these questions be sure to read Echo Nest founder Brian Whitman’s latest post on Variogr.am about the kinds of predictions we can make about people based upon what they listen to.
This week, The Echo Nest is releasing some new API features that make it easy for developers to build apps that take advantage of this listening data. One new API is Taste Profile Similarity. This API lets you take a seed taste profile (a taste profile is how The Echo Nest represents an individual’s music taste) and find other taste profiles that are similar to that seed. To demonstrate one type of application you can build with this new similarity API, we’ve created a web app called “What’s your stereotype?” This application will look at your music taste (based on your Facebook likes, or your jams from This Is My Jam), and tell you which Internet meme best fits your listening style.
Yes, the app will pigeonhole you into a narrow, and probably demeaning demographic. You will probably be offended. Here’s my musical stereotype:
If you want to have your own music taste pigeonholed like this, you can try the app yourself at What’s your stereotype? Just remember, you will probably be offended.
To create this app, we identified a whole bunch of Internet memes and personas and made some predictions about the type of music each of these personas would listen to. We then look at the music taste similarity between you and each of the personas – the closest matching one becomes your musical stereotype.
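The matching step can be sketched locally. The real app uses the Taste Profile Similarity API; this stand-in, where a taste is just a map of artist names to affinity weights, is an assumption for illustration only:

```javascript
// Overlap score between two tastes (maps of artist -> affinity weight).
function tasteSimilarity(tasteA, tasteB) {
  var score = 0;
  for (var artist in tasteA) {
    if (tasteB.hasOwnProperty(artist)) {
      // reward shared artists by the smaller of the two affinities
      score += Math.min(tasteA[artist], tasteB[artist]);
    }
  }
  return score;
}

// The persona with the highest similarity becomes your stereotype.
function closestPersona(userTaste, personas) {
  var best = null, bestScore = -Infinity;
  personas.forEach(function (p) {
    var s = tasteSimilarity(userTaste, p.taste);
    if (s > bestScore) { bestScore = s; best = p; }
  });
  return best;
}
```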
The hardest part about building this app was identifying all of the appropriate Internet memes, predicting the music taste for each meme, collecting images, links and attribution, and most challenging of all, writing the witticisms that accompany each meme. Leading this effort was Matthew Santiago, our chief data quality guy here at The Echo Nest. Matthew organized the meme-dream team to collect and massage all this data. Our highly creative meme-dream team includes Michelle, Nell, Charlie, Alyse, Ryan, Sonja, Nicola, Sam, Roisin, Julie, Sara and Alex.
This app demonstrates what we can do with just a little bit of data about your music taste. The techniques that Brian describes, coupled with all the deep data we are gathering around listening habits, will help us get a much deeper understanding of your music taste. This understanding will be key to helping us craft the best music listening experience for you. So, go check out What’s your stereotype? I hope you’ll have as much fun with the app as we had building it.
Over the last few years I’ve made a number of 1,000+ mile road trips as I shuttle kids to colleges in far away places. Listening to music has always been a big part of these trips. I thought it’d be nice to be able to listen to music by local artists when driving through a particular region, so I spent a few weekends creating an app called Roadtrip Mixtape that populates a roadtrip playlist with artists that are from the region you are driving through.
To create a playlist, type your starting and ending cities for your roadtrip. The app will use Google’s directions to plan the best route between the two cities. The route will then be broken into 15 minute playlist legs. Each playlist leg is populated by 15 minutes worth of music by nearby artists.
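The leg-splitting can be sketched as a simple accumulation over the route's steps. This assumes, as Google's directions provide, a per-step duration in seconds; it is an illustration of the idea, not the app's actual code:

```javascript
// Split a route (an array of steps with .duration in seconds) into legs
// of roughly legSeconds each; each leg then gets its own playlist.
function splitIntoLegs(steps, legSeconds) {
  var legs = [], current = [], elapsed = 0;
  steps.forEach(function (step) {
    current.push(step);
    elapsed += step.duration;
    if (elapsed >= legSeconds) {   // leg is full: close it out
      legs.push(current);
      current = [];
      elapsed = 0;
    }
  });
  if (current.length) legs.push(current); // final partial leg
  return legs;
}
```

For a 15-minute leg you'd call this with `legSeconds = 900`, then populate each leg with music by artists near that leg's midpoint.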
The beginning of each leg is represented by a green ball. You can click on the ball to see what artists will be played during that leg. The app plays music via Rdio using their nifty Web Player API. If you are an Rdio subscriber you can listen to full streams, and if not you get to hear 30 second samples. One bit of interesting info that I show for a route is the ‘Avg distance’. This shows the average distance to each artist on the roadtrip. If this number is low, you are traveling through a musically dense part of the world, and if it is high, you are traveling in a sparse musical region. For instance, for a roadtrip from Boston to New York the average artist distance is 3 miles (about as low as it goes). However, if you are traveling from Omaha to Denver, the average artist distance is 81 miles.
You can also click anywhere on the map to see and listen to nearby artists. For example, if you click on Shreveport you’ll see something like this:
When you click the ‘Hear here’ button, you’ll get a playlist of the hotttest artists from Shreveport.
Listening to nearby artists is quite fun. There’s potential for some extreme sonic whiplash as you drive near a brutal death metal band and then a pop vocalist from the 1950s.
The Technical Bits
To build the app I used the new artist location data from The Echo Nest. This (still in beta) feature allows you to retrieve the location of any artist. Here’s an example API call that retrieves the artist location for Radiohead:
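The original embedded call isn't reproduced here, but reconstructed from memory of the Echo Nest v4 API it was an artist/profile request with an `artist_location` bucket; treat the exact parameter names as assumptions:

```javascript
// Build an Echo Nest v4 artist/profile URL requesting artist location.
// Parameter names reconstructed from memory of the v4 API.
function artistLocationUrl(apiKey, artistName) {
  return 'http://developer.echonest.com/api/v4/artist/profile' +
         '?api_key=' + encodeURIComponent(apiKey) +
         '&name=' + encodeURIComponent(artistName) +
         '&bucket=artist_location&format=json';
}
```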
For this app, I collected the locations for the top 100,000 or so most popular artists in the Rdio catalog. These artists were from about 15,000 different cities. I used geopy along with the Yahoo PlaceFinder geocoder to find the latitude and longitude for each of these cities. For the mapping and route finding, I used version 3.9 of the Google Maps API. For music playback I used the Rdio Web Playback API. With the tight integration between the Echo Nest and Rdio ID spaces, it was easy to go from a geolocated Echo Nest artist to a list of Rdio track IDs for songs by that artist.
The Bad Bits
As a web app that relies on the flash-based Rdio web player, Roadtrip Mixtape is not really a mobile app. It won’t play music on an iPhone or iPad, so the best way to actually use this app on the road is probably to bring along a tethered laptop – not the best user experience. Thus, my next weekend project will be to learn a little bit of iOS programming and make a version of this app that runs on an iPhone and an iPad. Stay tuned for the next version.
There are many cities in the United States that are known for their music. Cities like Nashville, Detroit, Seattle and New Orleans have played a major part in the musical history and development of this country. But what is the most musical city? Which city has spawned the most musical artists? To answer this question I used the soon-to-be-released artist location data from The Echo Nest artist API. I gathered up the top 50,000 or so U.S. artists, found their city of origin and tallied the number of artists per city. From this tally I calculated the number of artists per 1,000 inhabitants in each city. The more artists per 1,000 inhabitants, the more musical the city.
Using the artists per 1k inhabitants, we can easily find the top 25 most musical cities in the United States:
| # | Artists per 1,000 inhabitants | Artists | Population | City |
|---|---|---|---|---|
| 1 | 3.14 | 111 | 35,355 | Beverly Hills, CA |
| 2 | 2.26 | 1,651 | 732,072 | San Francisco, CA |
| 11 | 1.24 | 4,789 | 3,877,129 | Los Angeles, CA |
| 12 | 1.22 | 15 | 12,314 | Muscle Shoals, AL |
| 16 | 1.05 | 50 | 47,529 | Chapel Hill, NC |
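The metric itself is a one-line computation; here's a small sketch of the tally, checked against the Beverly Hills row of the table:

```javascript
// Artists per 1,000 inhabitants, rounded to two decimal places
// (the musicality score used in the table).
function artistsPerThousand(artistCount, population) {
  return Math.round((artistCount / population) * 1000 * 100) / 100;
}
```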
I find the results to be pretty interesting. Beverly Hills, the tiny city at the heart of the entertainment world, is #1. San Francisco is the most musical of all large cities, followed closely by Nashville. Among the most musical of small cities is Muscle Shoals, AL, which, according to Wikipedia, is famous for its contributions to American popular music. Less musical than expected are New Orleans (rank 36), NYC (rank 37) and Detroit (rank 52).
Among the least musical cities in the U.S. is my hometown (Manchester, NH), with only one artist in the top 50,000 U.S.-based artists for its 100K inhabitants. The least musical large city in the U.S. is Kansas City, KS, with only 7 top-50K artists for its nearly half million inhabitants. Luckily, Kansas City residents can drive a few miles to Kansas City, Missouri (with its 194 musicians for its 442K inhabitants) when they get tired of their own seven artists.
You can see the full list of cities with population greater than 5,000 ordered by their musicality here: The Most Musical Cities in the United States. I’d love to do this for all the cities in the world, but I can’t find a good source of city population data for world cities. If you know of one let me know.
I’m rather excited about this upcoming release of artist location data in our API. It will open the doors for a whole bunch of interesting applications, such as road trip playlisters that play music by artists local to the city you are near, contextual playlisters that favor artists from your home town, or music exploration apps that let you explore music from a particular region of the world. I can’t wait to see what people build with this data. Stay tuned, I’ll post when the API is released.
With the recently announced Spotify integration via Project Rosetta Stone, The Echo Nest now makes available a detailed audio analysis for millions of Spotify tracks. This audio analysis includes summary features such as tempo, loudness, energy, danceability, key and mode, as well as a set of fine-grained segment features that describe details such as where each bar, beat and tatum falls and the detailed pitch, timbral and loudness content of each audio event in the song. These features can be very useful for driving Spotify applications that need to react to what the music sounds like – from advanced dynamic music visualizations like the MIDEM Music Machine to synchronized music games like Guitar Hero.
I put together a little Spotify App that demonstrates how to synchronize Spotify Playback with the Echo Nest analysis. There’s a short video here of the synchronization:
video on youtube: http://youtu.be/TqhZ2x86RXs
In this video you can see the audio summary for the currently playing song, as well as a synchronized display of ‘bar’ and ‘beat’ labels and the detailed loudness, timbre and pitch values for the current segment.
How it works:
To get the detailed audio analysis, call the track/profile API with the Spotify Track ID for the track of interest. For example, here’s how to get the track for Radiohead’s Karma Police using the Spotify track ID:
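The original embedded call isn't shown here; reconstructed from memory of the Echo Nest v4 API (so treat the exact parameter names as assumptions), the request is a track/profile call with an `audio_summary` bucket, using the Rosetta Stone form of the Spotify track ID:

```javascript
// Build a track/profile URL for a Spotify track. Spotify IDs go through
// the Rosetta 'spotify-WW' ID space; parameter names are reconstructed
// from memory of the v4 API.
function trackProfileUrl(apiKey, spotifyTrackUri) {
  var rosettaId = spotifyTrackUri.replace('spotify', 'spotify-WW');
  return 'http://developer.echonest.com/api/v4/track/profile' +
         '?api_key=' + encodeURIComponent(apiKey) +
         '&id=' + encodeURIComponent(rosettaId) +
         '&bucket=audio_summary&format=json';
}
```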
This returns audio summary info for the track, including the tempo, energy and danceability. It also includes a field called the analysis_url which contains an expiring URL to the detailed analysis data. (A very abbreviated excerpt of an analysis is contained in this gist).
To synchronize Spotify playback with the Echo Nest analysis we need to first get the detailed analysis for the now-playing track. We can do this by making the aforementioned track/profile call to get the analysis_url for the detailed analysis, and then retrieving the analysis (it is stored in JSON format, so no reformatting is necessary). There is one technical glitch, though: there is no way to make a JSONP call to retrieve the analysis, which prevents you from retrieving it directly into a web app or a Spotify app. To get around this issue, I built a little proxy at labs.echonest.com that supports a JSONP-style call to retrieve the contents of the analysis URL. For example, the call:
will return the analysis JSON wrapped in the foo() callback function. The Echo Nest does plan to add JSONP support for retrieving analysis data, but until then feel free to use my proxy. No guarantees on support or uptime, since it is not supported by engineering. Use at your own risk.
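For illustration, here's roughly how such a JSONP fetch can be wired up. The proxy endpoint and its `url`/`callback` parameter names below are placeholders, not the actual proxy's interface:

```javascript
// Build a JSONP request URL for a proxy that fetches the analysis and
// wraps it in the named callback (parameter names are hypothetical).
function jsonpAnalysisUrl(proxyUrl, analysisUrl, callbackName) {
  return proxyUrl +
         '?url=' + encodeURIComponent(analysisUrl) +
         '&callback=' + encodeURIComponent(callbackName);
}

// In a web or Spotify app you would then inject it as a script tag:
//   var s = document.createElement('script');
//   s.src = jsonpAnalysisUrl(PROXY, analysisUrl, 'foo');
//   document.head.appendChild(s);  // the browser invokes foo(analysis)
```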
Once you have retrieved the analysis, you can get the current bar, beat, tatum and segment info based upon the current track position, which you can retrieve from Spotify with: sp.getTrackPlayer().getNowPlayingTrack().position. Since all the events in the analysis are timestamped, it is straightforward to find the corresponding bar, beat, tatum and segment given any song timestamp. I’ve posted a bit of code on gist that shows how I pull out the current bar, beat and segment based on the current track position, along with some code that shows how to retrieve the analysis data from the Echo Nest. Feel free to use the code to build your own synchronized Echo Nest/Spotify app.
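The timestamp lookup is simple enough to sketch (my own illustration, not the gist code): every analysis event has a start time, so the event under the playhead is the last one whose start is at or before the current position, and a binary search keeps the lookup cheap enough to run every animation frame:

```javascript
// Return the last event whose start time is <= seconds, or null if the
// position is before the first event. Events are sorted by start time.
function eventAt(events, seconds) {
  var lo = 0, hi = events.length - 1, found = null;
  while (lo <= hi) {
    var mid = (lo + hi) >> 1;
    if (events[mid].start <= seconds) {
      found = events[mid];  // candidate; look for a later one
      lo = mid + 1;
    } else {
      hi = mid - 1;
    }
  }
  return found;
}

// Pull the current bar, beat and segment out of an analysis object,
// given the current track position in seconds.
function currentContext(analysis, seconds) {
  return {
    bar: eventAt(analysis.bars, seconds),
    beat: eventAt(analysis.beats, seconds),
    segment: eventAt(analysis.segments, seconds)
  };
}
```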
The Spotify App platform is an awesome platform for building music apps. Now, with the ability to use Echo Nest analysis from within Spotify apps, it is a lot easier to build Spotify apps that synchronize to the music. This opens the door to a whole range of new apps. I’m really looking forward to seeing what developers will build on top of this combined Echo Nest and Spotify platform.
Last week The Echo Nest and Spotify announced an integration of APIs making it easy for developers to write Spotify Apps that take advantage of the deep music intelligence offered by the Echo Nest. The integration is via Project Rosetta Stone (PRS). PRS is an ID mapping layer in the API that allows developers to use the IDs from any supported music service with the Echo Nest API. For instance, a developer can request via the Echo Nest playlist API a playlist seeded with a Spotify artist ID and receive Spotify track IDs in the results.
This morning I created a Spotify App that demonstrates how to use the Spotify and Echo Nest APIs together. The app is a simple playlister with the following functions:
- Gets the artist for the currently playing song in Spotify
- Creates an artist radio playlist based upon the now playing artist
- Shows the playlist, allowing the user to listen to any of the playlist tracks
- Allows the user to save the generated playlist as a Spotify playlist.
makePlaylistFromNowPlaying() - grabs the current track from spotify and fetches and displays the playlist from The Echo Nest.
fetchPlaylist() - The bulk of the work is done in the fetchPlaylist method. This method makes a JSONP call to the Echo Nest API to generate a playlist seeded with the Spotify artist. The Spotify artist ID needs to be massaged slightly: in the Echo Nest world, Spotify artist IDs look like ‘spotify-WW:artist:12341234’, so we convert from the Spotify form to the Echo Nest form with the one-liner:
var artist_id = artist.uri.replace('spotify', 'spotify-WW');
Here’s the code:
The function createPlayButton creates a doc element with a clickable play image, that when clicked, calls the playSong method, which grabs the Spotify Track ID from the song and tells Spotify to play it:
Update: I was using a deprecated method of playing tracks. I’ve updated the code and example to show the preferred method (Thanks @mager).
When we make the playlist call we include a buckets parameter requesting that Spotify IDs be included in the returned tracks. We then need to reverse the mapping to go from the Echo Nest form of the ID to the Spotify form, like so:
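The reverse mapping is just the mirror of the earlier one-liner; a small sketch:

```javascript
// Convert an Echo Nest Rosetta ID ('spotify-WW:track:...') back to a
// plain Spotify URI that the Spotify player can play.
function toSpotifyUri(rosettaId) {
  return rosettaId.replace('spotify-WW', 'spotify');
}
```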
Saving the playlist as a spotify playlist is a 3 line function:
Installing and running the app
To install the app, follow these steps:
- make sure you have a Spotify Developer Account
- Make a ‘playlister’ directory in your Spotify apps folder (on a Mac this is ~/Spotify/playlister)
- Get the project files from github
- Copy the project files into the ‘playlister’ directory. The files are:
- index.html – the app (html and js)
- manifest.json – describes your app to Spotify. The most important bit is the ‘RequiredPermissions’ section that lists ‘http://*echonest.com’. Without this entry, your app won’t be able to talk to The Echo Nest.
- js/jquery.min.js – jquery
- styles.css – minimal css for the app
- play.png – the image for the play button
- icon.png – the icon for the app
To run the app type ‘spotify:app:playlister’ in the Spotify search bar. The app should appear in the main window.
Well, that’s it – a Spotify playlisting app that uses the Echo Nest playlist API to generate the playlist. Of course, this is just the tip of the iceberg. With the Spotify/Echo Nest connection you can easily make apps that use all of the Echo Nest artist data: artist news, reviews, blogs, images, bios etc, as well as all of the detailed Echo Nest song data: tempo, energy, danceability, loudness, key, mode etc. Spotify has created an awesome music app platform. With the Spotify/Echo Nest connection, this platform has just got more awesome.
Tristan Jehan, one of the founders here at the Echo Nest, has created a Python script that will take a 4/4 song and turn it into a waltz. The script uses Echo Nest remix, a Python library that lets you algorithmically manipulate music. Here’s an example of the output of the script when applied to the song ‘Fame’:
Turning a 4/4 song into a 3/4 song while still keeping the song musical is no easy feat. But Tristan’s algorithm does a pretty good job. Here’s what he does:
- Start with a 4/4 measure
- Cut the 4/4 measure into 2 bars with 2 beats in each bar
- Stretch the first beat of each bar by 100%
- Adjust the tempo to a typical waltz tempo
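The steps above can be sketched as arithmetic on beat durations. This is an illustration of the idea only, not Tristan's actual remix code (which operates on the audio itself), and it leaves out the final tempo adjustment:

```javascript
// Turn one 4/4 measure (four beat durations, in seconds) into two
// waltz bars: split into 2-beat bars, then stretch the first beat of
// each bar by 100%, giving a long-short (2+1) three-beat feel.
function waltzify(measure) {
  var bars = [[measure[0], measure[1]], [measure[2], measure[3]]];
  return bars.map(function (bar) {
    return [bar[0] * 2, bar[1]]; // stretch first beat by 100%
  });
}
```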
Here’s a graphic that shows the progression:
Here are some more examples:
I recently gave a talk on Data Mining Music at SXSW. It was a standing room only session, with an enthusiastic audience that asked great questions. It was a really fun time for me. I’ve posted the slides to Slideshare, but be warned that there are no speaker notes so it may not always be clear what any particular slide is about. There was lots of music in the talk, but unfortunately, it is not in the Slideshare PDF. The links below should flesh out most of the details and have some audio examples.
- Have artist names been getting longer?
- The Passion Index - Find the bands that have the most passionate fans
- Six Degrees of Black Sabbath - Using artist relationship data to build a Six Degrees of Kevin Bacon for Music
- Frog-based playlisting - Building advanced playlists by finding paths through the artist space
- The Click Track Detector - Finding drummers that use a click track
- Looking for the Slow Build - Finding songs that have a gradual build
- Bohemian Rhapsichord - Turning a popular song into a musical instrument, with data.
- Midem Music Machine - Making a beautiful visualization of music
- The Swinger - Making any song swing
Thanks to everyone who attended.