Archive for category Music
What happens when you bring a fan on stage?
Sometimes a performer at a concert will bring a fan up on stage to sing or play along with the band. Most of the time this does not go very well. As American Idol has shown, the Dunning-Kruger effect seems to apply most frequently to budding musicians. It is quite likely that the fan who was just brought on stage thinks they are ready to perform in front of thousands, but in reality they don’t deserve to be there. When you ask a fan to sing, you are most likely to get these results:
[youtube http://www.youtube.com/watch?v=a_QqfEYNRlc&feature=youtube_gdata_player]Still, sometimes, a fan has talent or a special spark, and those moments are magical. That is why bands like Green Day will often bring audience members up on stage to take part in the show. The law of averages applies: sooner or later Billie Joe finds a winner. My favorite Green Day fan performance is this German guitar player playing Basket Case:
[youtube http://www.youtube.com/watch?v=V7fal-gFOIo&feature=youtube_gdata_player]Another band that brings fans on stage is Avenged Sevenfold. In this clip a kid gets on the drums and rocks the house. Drumming starts about 2 minutes in:
[youtube http://www.youtube.com/watch?v=6pKqK2WtsA4&feature=youtube_gdata_player]At this Montreal show, Josh Groban comes into the audience to sing a duet with a 17-year-old, French-speaking young lady. Singing starts at 2:45 or so. It gets a little misty for me watching this.
[youtube http://www.youtube.com/watch?v=RXAak-NamNE&feature=fvwrel]In this clip, Johnny Reid, (Canadian Country musician) invites a very young fan to be one of his backup singers.
[youtube http://www.youtube.com/watch?v=xFCB4opoEyw&feature=youtube_gdata_player]It is fun to watch Michael Bublé’s surprise when he realizes the boy on stage can actually sing (‘holy shitballs mom!’). The boy, Sam Hollyman, has an album out now on iTunes:
[youtube http://www.youtube.com/watch?v=o6TKpkY4WcM&feature=youtube_gdata_player]At a U2 show, Bono invited a blind man up on stage who had a sign asking to play a song for his wife. It is a touching moment:
[youtube http://www.youtube.com/watch?v=uURKYk06Q7c&feature=youtube_gdata_player]A very clean-cut 13-year-old plays with Machine Head. Surfer meets metal.
[youtube http://www.youtube.com/watch?v=LoWjFOTGgbM&feature=youtube_gdata_player]Another kid gets up to play with Linkin Park. He takes control of the stage. (his ten second audition starts at 4:00, and then into Faint):
[youtube http://www.youtube.com/watch?v=YNkTQSLL9U4&feature=youtube_gdata_player]A sixteen-year-old young lady is put on the spot by P!nk and pulls it off:
[youtube http://www.youtube.com/watch?feature=player_embedded&v=HB4E9XVt5Kc]My Chemical Romance with a wheelchair-bound fan. Enthusiasm trumps all here:
[youtube http://www.youtube.com/watch?v=as4afIlV2rk&feature=youtube_gdata_player]The chutzpah award goes to this young man who jumped on stage *after* a Decemberists concert and started playing the song that Colin and company had just finished playing:
[youtube http://www.youtube.com/watch?v=oYMCnFgGFCA&feature=youtube_gdata_player]Update: One of the commenters on this post pointed me to this clip from a concert in Sydney when The Hives bring a bass player on stage. They give him a wardrobe upgrade before they let him play. Skip ahead to 50 seconds in.
[youtube http://www.youtube.com/watch?v=DcX52rbm-xw]The most famous fan appearance is probably The Horse Tranquilizer Incident: after an unfortunate run-in with some rather strong drugs, Keith Moon was unable to finish a show, so Pete Townshend asked the audience “can anybody play the drums? I mean somebody good.” Scott Halpin, who hadn’t played drums in a year, volunteered. Video timeline:
- At 1:10 Pete tells the crowd that Keith has passed out.
- At 3:50 Moon passes out a second time.
- At 5:30 Pete asks the audience, “can anybody play the drums? I mean somebody good.”
- At 6:30 Audience member Scott Halpin replaces Keith Moon.
- At 8:19 Scott Halpin and the band take a bow.
That year, Scott Halpin was awarded Rolling Stone magazine’s “Pick-Up Player of the Year Award” for his historic performance during this show. As far as I know, Scott remains the only fan called on stage to win an award for his performance.
Why streaming recommendations are different than DVD recommendations at Netflix
From Why Netflix Never Implemented The Algorithm That Won The Netflix $1 Million Challenge
An interesting insight:
when people rent a movie that won’t arrive for a few days, they’re making a bet on what they want at some future point. And, people tend to have a more… optimistic viewpoint of their future selves. That is, they may be willing to rent, say, an “artsy” movie that won’t show up for a few days, feeling that they’ll be in the mood to watch it a few days (weeks?) in the future, knowing they’re not in the mood immediately. But when the choice is immediate, they deal with their present selves, and that choice can be quite different.
When I was a Netflix DVD subscriber the Seven Samurai sat on top of my TV for months. My present self never matched the optimistic view I had of my future self.
Xavier’s blog post on Netflix recommendations is worth the read: it covers dealing with a household with widely different tastes and the importance of the order in which recommendations are presented.
Why do Music Hackers hack?
[vimeo http://vimeo.com/40027211 w=600]
A short film by Pauline de Zeew, with Paul King and Syd Lawrence
Syncing Echo Nest analysis to Spotify Playback
Posted by Paul in code, Music, The Echo Nest on April 9, 2012
With the recently announced integration of Spotify into Project Rosetta Stone, The Echo Nest now makes available a detailed audio analysis for millions of Spotify tracks. This audio analysis includes summary features such as tempo, loudness, energy, danceability, key and mode, as well as a set of fine-grained segment features that describe details such as where each bar, beat and tatum fall and the detailed pitch, timbral and loudness content of each audio event in the song. These features can be very useful for driving Spotify applications that need to react to what the music sounds like – from advanced dynamic music visualizations like the MIDEM Music Machine to synchronized music games like Guitar Hero.
I put together a little Spotify App that demonstrates how to synchronize Spotify Playback with the Echo Nest analysis. There’s a short video here of the synchronization:
video on youtube: http://youtu.be/TqhZ2x86RXs
In this video you can see the audio summary for the currently playing song, as well as a display of synchronized ‘bar’ and ‘beat’ labels and the detailed loudness, timbre and pitch values for the current segment.
How it works:
To get the detailed audio analysis, call the track/profile API with the Spotify Track ID for the track of interest. For example, here’s how to get the track for Radiohead’s Karma Police using the Spotify track ID:
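The request can be sketched as a small URL builder. This is a hedged illustration: the endpoint path and `audio_summary` bucket name should be double-checked against the Echo Nest API documentation, and the API key is a placeholder.

```javascript
// Sketch: build a track/profile request URL for a Spotify track ID.
// The Echo Nest expects Rosetta-style IDs: 'spotify-WW:track:<spotify id>'.
function trackProfileUrl(apiKey, spotifyTrackId) {
  var rosettaId = 'spotify-WW:track:' + spotifyTrackId;
  return 'http://developer.echonest.com/api/v4/track/profile' +
    '?api_key=' + encodeURIComponent(apiKey) +
    '&format=json' +
    '&id=' + encodeURIComponent(rosettaId) +
    '&bucket=audio_summary';
}
```

Fetching that URL (with your own API key) returns the JSON described below.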
This returns audio summary info for the track, including the tempo, energy and danceability. It also includes a field called the analysis_url which contains an expiring URL to the detailed analysis data. (A very abbreviated excerpt of an analysis is contained in this gist).
To synchronize Spotify playback with the Echo Nest analysis we need to first get the detailed analysis for the now playing track. We can do this by calling the aforementioned track/profile call to get the analysis_url for the detailed analysis, and then retrieve the analysis (it is stored in JSON format, so no reformatting is necessary). There is one technical glitch though. There is no way to make a JSONP call to retrieve the analysis. This prevents you from retrieving the analysis directly into a web app or a Spotify app. To get around this issue, I built a little proxy at labs.echonest.com that supports a JSONP style call to retrieve the contents of the analysis URL. For example, the call:
http://labs.echonest.com/3dServer/analysis?callback=foo&url=http://url_to_the_analysis_json
will return the analysis JSON wrapped in the foo() callback function. The Echo Nest does plan to add JSONP support for retrieving analysis data, but until then feel free to use my proxy. No guarantees on support or uptime, since it is not supported by engineering. Use at your own risk.
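The proxy call can be sketched like this. The URL builder is runnable anywhere; the script-tag injection (how JSONP actually executes in a page) is shown in comments since it needs a browser DOM. Treat the proxy endpoint as unofficial, per the caveats above.

```javascript
// Sketch: build the JSONP proxy URL for an expiring analysis_url.
function analysisProxyUrl(callbackName, analysisUrl) {
  return 'http://labs.echonest.com/3dServer/analysis' +
    '?callback=' + encodeURIComponent(callbackName) +
    '&url=' + encodeURIComponent(analysisUrl);
}

// In a browser or Spotify app you would then inject a script tag:
//   window.foo = function (analysis) { /* use the analysis here */ };
//   var s = document.createElement('script');
//   s.src = analysisProxyUrl('foo', analysisUrl);
//   document.head.appendChild(s);
```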
Once you have retrieved the analysis you can get the current bar, beat, tatum and segment info based upon the current track position, which you can retrieve from Spotify with: sp.getTrackPlayer().getNowPlayingTrack().position. Since all the events in the analysis are timestamped, it is straightforward to find the corresponding bar, beat, tatum and segment given any song timestamp. I’ve posted a bit of code on gist that shows how I pull out the current bar, beat and segment based on the current track position, along with some code that shows how to retrieve the analysis data from The Echo Nest. Feel free to use the code to build your own synchronized Echo Nest/Spotify app.
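The timestamp lookup can be sketched as a binary search over the event list. This is a minimal illustration, not the gist code; it assumes each event has `start` and `duration` fields in seconds, sorted by `start`, which is how the analysis lays them out.

```javascript
// Sketch: find the event (bar, beat, tatum or segment) active at a given
// playback position, using binary search over events sorted by start time.
function findCurrentEvent(events, position) {
  var lo = 0, hi = events.length - 1, best = null;
  while (lo <= hi) {
    var mid = (lo + hi) >> 1;
    if (events[mid].start <= position) {
      best = events[mid];   // candidate; keep looking for a later one
      lo = mid + 1;
    } else {
      hi = mid - 1;
    }
  }
  return best; // the last event starting at or before 'position', or null
}

// Usage: positions from Spotify are in milliseconds, analysis times in
// seconds, so divide first:
//   var beat = findCurrentEvent(analysis.beats, positionMs / 1000);
```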
The Spotify App platform is an awesome platform for building music apps. Now, with the ability to use Echo Nest analysis from within Spotify apps, it is a lot easier to build Spotify apps that synchronize to the music. This opens the door to a whole range of new apps. I’m really looking forward to seeing what developers will build on top of this combined Echo Nest and Spotify platform.
Writing an Echo Nest + Spotify App
Posted by Paul in code, Music, The Echo Nest on April 7, 2012
Last week The Echo Nest and Spotify announced an integration of APIs making it easy for developers to write Spotify Apps that take advantage of the deep music intelligence offered by the Echo Nest. The integration is via Project Rosetta Stone (PRS). PRS is an ID mapping layer in the API that allows developers to use the IDs from any supported music service with the Echo Nest API. For instance, a developer can request via the Echo Nest playlist API a playlist seeded with a Spotify artist ID and receive Spotify track IDs in the results.
This morning I created a Spotify App that demonstrates how to use the Spotify and Echo Nest APIs together. The app is a simple playlister with the following functions:
- Gets the artist for the currently playing song in Spotify
- Creates an artist radio playlist based upon the now playing artist
- Shows the playlist, allowing the user to listen to any of the playlist tracks
- Allows the user to save the generated playlist as a Spotify playlist.
The entire app, including all of the HTML, CSS and JavaScript, is 150 lines long. I’ve made all the code available in the github repository SpotifyEchoNestPlaylistDemo. Here are some of the salient bits. (Apologies for the screenshots of code; WordPress.com has poor support for embedding source code. I’ve been waiting for gist embeds for a year.)
makePlaylistFromNowPlaying() – grabs the current track from spotify and fetches and displays the playlist from The Echo Nest.
fetchPlaylist() – The bulk of the work is done in the fetchPlaylist method. This method makes a JSONP call to the Echo Nest API to generate a playlist seeded with the Spotify artist. The Spotify artist ID needs to be massaged slightly. In the Echo Nest world Spotify artist IDs look like ‘spotify-WW:artist:12341234’, so we convert from the Spotify form to the Echo Nest form with the one-liner:
var artist_id = artist.uri.replace('spotify', 'spotify-WW');
Here’s the code:
The function createPlayButton creates a DOM element with a clickable play image that, when clicked, calls the playSong method, which grabs the Spotify track ID from the song and tells Spotify to play it:
Update: I was using a deprecated method of playing tracks. I’ve updated the code and example to show the preferred method (Thanks @mager).
When we make the playlist call we include a buckets parameter requesting that Spotify IDs be included in the returned tracks. We then need to reverse the ID mapping to go from the Echo Nest form of the ID to the Spotify form, like so:
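The reverse mapping is a one-liner mirroring the earlier conversion. A sketch (the exact field that carries the Echo Nest ID depends on the response format, so treat the function input as an assumption):

```javascript
// Sketch: convert an Echo Nest Rosetta ID like 'spotify-WW:track:xyz'
// back into the Spotify URI form 'spotify:track:xyz'.
function toSpotifyUri(echoNestId) {
  return echoNestId.replace('spotify-WW', 'spotify');
}
```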
Saving the generated playlist as a Spotify playlist is a three-line function:
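Since the original screenshot is gone, here is a hedged sketch of what those three lines look like. The `sp.core.library.createPlaylist` and `playlist.add` names are assumptions from memory and should be checked against the Spotify Apps API docs; the `sp` object below is a stand-in mock so the snippet is self-contained.

```javascript
// Mock of the Spotify 'sp' object, standing in for the real Apps API.
var sp = { core: { library: { createPlaylist: function (name) {
  return { name: name, uris: [], add: function (uri) { this.uris.push(uri); } };
}}}};

// Sketch: create a named playlist and add each track URI to it.
function savePlaylist(name, trackUris) {
  var playlist = sp.core.library.createPlaylist(name);
  trackUris.forEach(function (uri) { playlist.add(uri); });
  return playlist;
}
```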
Installing and running the app
To install the app, follow these steps:
- make sure you have a Spotify Developer Account
- Make a ‘playlister’ directory in your Spotify apps folder (on a Mac this is ~/Spotify/playlister)
- Get the project files from github
- Copy the project files into the ‘playlister’ directory. The files are:
- index.html – the app (html and js)
- manifest.json – describes your app to Spotify. The most important bit is the ‘RequiredPermissions’ section that lists ‘http://*echonest.com’. Without this entry, your app won’t be able to talk to The Echo Nest.
- js/jquery.min.js – jquery
- styles.css – minimal css for the app
- play.png – the image for the play button
- icon.png – the icon for the app
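For reference, a minimal manifest.json might look like the sketch below. The RequiredPermissions entry matches the one described above; the other field names are from memory and should be checked against the Spotify Apps documentation.

```json
{
  "BundleIdentifier": "playlister",
  "BundleVersion": "0.1",
  "AppName": { "en": "playlister" },
  "SupportedLanguages": ["en"],
  "RequiredPermissions": ["http://*echonest.com"]
}
```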
To run the app type ‘spotify:app:playlister’ in the Spotify search bar. The app should appear in the main window.
Wrapping Up
Well, that’s it – a Spotify playlisting app that uses the Echo Nest playlist API to generate the playlist. Of course, this is just the tip of the iceberg. With the Spotify/Echo Nest connection you can easily make apps that use all of the Echo Nest artist data: artist news, reviews, blogs, images, bios etc, as well as all of the detailed Echo Nest song data: tempo, energy, danceability, loudness, key, mode etc. Spotify has created an awesome music app platform. With the Spotify/Echo Nest connection, this platform has just got more awesome.
Is music getting more profane?
This post has profanity in it. If you don’t like profanity, skip this post and instead just look at this picture of a cat. Otherwise, scroll on down to read about the rise and fall of profanity in music.
Now, on to the profanity …
It seems that every year the amount of profanity in music has increased. Today it seems that every other pop song drops the f-bomb, from P!nk’s ‘Fucking Perfect’ to Cee Lo’s ‘Fuck You’. I wondered if this apparent trend was real so I took a look at when certain obscene words started to show up in song titles to see if there are any obvious trends. Here’s the data:
The word ‘fuck’ doesn’t appear in a song title until 1977, when the band ‘The Way’ released ‘Fucking Police’. This monumental song in music history seems to be lost to the Internet age. The only evidence that this song ever existed is this MusicBrainz entry. The second song with ‘fuck’ in the title, ‘To Fuck The Boss’ by Blowfly, appeared in 1978. This sophomore effort is preserved on Youtube:
[youtube http://www.youtube.com/watch?v=J3wGresI0S4]The peak in usage of the word ‘fuck’ in song titles occurred in 2006, with 650 songs. Since then, usage has dropped off substantially; 2011 saw about the same ‘fuck’ frequency as 1999.
Usage of the word ‘shit’ has a similar profile:
The first usage of the word ‘shit’ in a song title was in 1966, in the song ‘I feel like homemade shit’ by The Fugs, which appeared on The Fugs’ first album (originally titled The Village Fugs Sing Ballads of Contemporary Protest, Point of Views, and General Dissatisfaction). Again the peak year of use is 2006, with 322 ‘shit’ songs that year.
Looking at these graphs, one would get the impression that use of profanity has grown substantially since the 70s and reached its peak a few years ago. However, there’s more to the data than that. Let’s look at a similar plot for a non-profane word:
This plot shows a very similar usage profile for the word ‘cat’, with substantial growth in use from the 70s until 2006 when it starts to taper off. (Yes, ‘cat’ was found in many songs before 1976, but I am not showing those in the plot). Why do ‘fuck’ and ‘cat’ have such similar profiles? It is not because their usage frequency has increased, it is because the total number of songs released has been increasing year-over-year until 2006, after which the number of new releases per year has been dropping off. We see more ‘fuck’s and ‘cat’s in 2006 because there were more songs released in 2006 than any other year. For a more accurate view we need to look at the relative usage changes. This plot shows the usage of the word ‘fuck’ relative to the usage of other words in song titles. Even when we look at the use of the word ‘fuck’ relative to other words there is a clear increasing trend.
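The normalization described above amounts to dividing each year’s raw count by that year’s total number of releases. A tiny sketch, with made-up numbers for illustration:

```javascript
// Sketch: convert raw per-year title counts into relative frequencies
// by dividing by the total number of song titles released each year.
function relativeUsage(countsByYear, totalsByYear) {
  var out = {};
  for (var year in countsByYear) {
    out[year] = countsByYear[year] / totalsByYear[year];
  }
  return out;
}
```

This is why ‘fuck’ and ‘cat’ can share a raw-count profile while having very different relative trends.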
Is music getting more profane? The answer is yes. The data show that the likelihood of a song with the word ‘fuck’ in the title has more than doubled since the 80s. And it doesn’t look like this trend has reached its peak yet. I think we shall continue to see a rise in use of language that gets a rise out of moms like Tipper Gore.
The Duke Listens! Archive
Before I joined The Echo Nest I worked in the research lab at Sun Microsystems. During my tenure at Sun I maintained a blog called ‘Duke Listens!’ where I wrote about things that I was interested in (mostly music recommendation, discovery, visualization, Music 2.0). When Oracle bought Sun a few years back they shut down the blogs for ex-employees and Duke Listens! was no more. However, a kind soul named John Henning spent quite a bit of time writing perl scripts to capture all the Duke Listens! data. He stuck it on a CD and gave it to me. It has been sitting on my computer for about a year. This weekend, while hanging out on the Music Hack Day IRC, I wrote some Python (thanks, BeautifulSoup), reformatted the blog posts, created some indices and pushed out a static version of the blog.
You can now visit the Duke Listens! Archive and look through more than 1,000 blog posts that chronicle five years of Music 2.0 history (from 2004 to 2009). Some favorite posts:
- My first MIR-related post (June 2004)
- My first hardcore MIR post (January 2005)
- A very wrong prediction about Apple (January 2005)
- I discover Radio Paradise (April 2005)
- First Google Music rumor (June 2005)
- First Amazon Music rumor (August 2005)
- First Pandora Post (September 2005)
- First mention of The Echo Nest (October 2005)
- Why there’s no Google Music search (December 2005)
- First mention of Spotify (January 2007)
- My review of Spotify (November 2007)
- The Echo Nest goes live (March 2008)
- The Echo Nest launches their API (September 2008)
- My first look at iTunes genius recommendations (September 2008)
- My last post (February 2009)
Data Mining Music at SXSW
If you happen to be in Austin this week for SXSW consider attending my talk called Data Mining Music. It is all about the fun things you can discover about music when you have data about millions of songs and artists.
The talk is on Sunday, March 11 at 5:00PM in the Rio Grande room of the Hilton Garden Inn. All the details are here: Data Mining Music
Hackathons are not nonsense
Dave Winer says that Hackathons are nonsense. Specifically he says:
Hackathons are how marketing guys wish software were made.
However, to make good software, requires lots of thought, trial and error, evaluation, iteration, trying the ideas out on other users, learning, thinking, more trial and error, and on and on. At some point you say it ain’t perfect, but it’s useful, so let’s ship. That process, if the software is to be any good, doesn’t happen in 24 hours. Sometimes it takes years, if the idea is new enough.
Dave says that software is hard and you can’t expect to build shippable software in a day. That’s certainly true, and if the goal of a hackathon were to get a bunch of developers together to build and ship commercial software in a day, I’d agree with him. But that’s not the goal of any of the hackathons I’ve attended.
I’ve participated in and/or helped organize perhaps a dozen Music Hack Days. At a Music Hack Day, people who are interested in music and technology get together for a weekend to learn about music tech and to build something with it. The goal isn’t to ship a software product, it is to scratch that personal itch to do something cool with music. The people who come to a Music Hack Day are often not in the music tech space, but are interested in learning about all the music APIs and tech available. They come to learn and then use what they’ve learned to build something. At the most recent Music Hack Day in San Francisco, 200 hackers built 60 hacks including new musical instruments, new music discovery tools, social music apps and music games.
Music Hack Days are not nonsense. They are incredibly creative weekends that have resulted in 1,000 or more really awesome music hacks. Consider the hackathon to be the haiku of programming: instead of 17 syllables in 3 lines, a hacker has 24 hours. (Maybe we should call them Haikuthons?) I think the 24-hour constraint contributes to the creativity of the event.
Here are some of my favorite hacks built at recent Music Hack Days. Plenty of whimsy but no nonsense here:
- Drinkify – Answers the question “I’m listening to X, what should I drink?
- Invisible Instruments – Just what it says, musical instruments that you can’t see
- Bohemian Rhapsichord – Turns Queen’s Opus into a musical instrument
- Musaic – Discover music through photomosaics
- MIDEM Music Machine – a beautiful visualization of a song
- Tourrent Plans – Plan your tour based on where all the torrent downloaders are
- Stringer – a virtual string instrument
- The Swinger – Makes any song swing
Billboard wins!
Yep, the numbers are in. Out of 13 Grammy awards, Billboard picked 7 correctly, while my web-crawling approach picked 6. Congrats to the Billboard editorial team for winning (this round!). Let me know where to send the milkshakes!
Here are the details. All the raw data is at Paul vs. Billboard.
| Category | Paul’s prediction | Billboard’s prediction | Actual Grammy | Who was right? |
| --- | --- | --- | --- | --- |
| Album Of The Year | Adele | Adele | Adele | Both |
| Record Of The Year | Adele | Adele | Adele | Both |
| Song of the Year | Adele | Bruno Mars | Adele | Paul |
| Best New Artist | Bon Iver | The Band Perry | Bon Iver | Paul |
| Best Pop Solo Performance | Adele | Lady Gaga | Adele | Paul |
| Best Pop Duo/Group Performance | Maroon 5 and Christina Aguilera | Tony Bennett | Tony Bennett | Billboard |
| Best R&B Album | Kelly Price | Chris Brown | Chris Brown | Billboard |
| Best Country Album | Jason Aldean | Taylor Swift | Lady Antebellum | Neither |
| Best Dance/Electronica Album | Cut/Copy | Skrillex | Skrillex | Billboard |
| Best Rock Album | Red Hot Chili Peppers | Foo Fighters | Foo Fighters | Billboard |
| Best Alternative Music Album | Bon Iver | Bon Iver | Bon Iver | Both |
| Best Latin Pop, Rock or Urban Album | Calle 13 | Calle 13 | Maná | Neither |
| Best Rap Album | Kanye West & Jay-Z | Nicki Minaj | Kanye West | Neither |