Posts Tagged sanfranmusictech

Music Hack Days are awesome

Flickr photo by paulamarttila

There are now two Music Hack Days on the calendar for the next month:  Amsterdam on April 24th, 25th and San Francisco on May 15th, 16th.

The Echo Nest will be participating in both. We just love music hack days – the amount of creativity that gets packed into one room during a hack day is just amazing. Plus it is a great way to meet developers face-to-face and see how they use our stuff. Brian will be representing at the Amsterdam event (I think Brian really likes Amsterdam). He'll be showing off some new APIs that we are all really excited about – plus we'll be giving our new API infrastructure a workout – everything will (hopefully) be faster, more reliable, and better documented.

Right on the heels of the Amsterdam event is the San Francisco Hack Day. It is being hosted at the Automattic Lounge on Pier 38, right in the city. The event is filling up really fast – there's lots of pent-up demand for a hack day in SF – and the mention in TechCrunch didn't hurt. At the SF hack day, if the stars align, we'll be releasing another new API feature – one that is perhaps the most requested feature of our APIs. Can't wait. Oh, and the SF Music Hack Day is right between two other cool music events: the Bay Area Computer Music Technology Group (BArCMuT) meetup on Thursday, May 13, and the SF Music Tech Summit on Monday, May 17th.

Wondering what a music hack day is like? Check out these photos: Flickr slide show of Music Hack Days past


Spotify + Echo Nest == w00t!

Yesterday, at the SanFran MusicTech Summit, I gave a sneak preview that showed how Spotify is tapping into the Echo Nest platform to help their listeners explore for and discover new music. I must say that I am pretty excited about this. Anyone who has read this blog and its previous incarnation as 'Duke Listens!' knows that I am a long-time enthusiast of Spotify (both the application and the team). I first blogged about Spotify way back in January of 2007, while they were still in stealth mode. I blogged about the Spotify haircuts, and their serious demeanor:

Those crazy Spotify guys

I blogged about the Spotify application when it was released to private beta: Woah – Spotify is pretty cool, and continued to blog about them every time they added another cool feature.

I’ve been a daily user of Spotify for 18 months now. It is one of my favorite ways to listen to music on my computer.  It gives me access to just about any song that I’d like to hear (with a few notable exceptions – still no Beatles for instance).

It is clear to anyone who uses Spotify for a few hours that having access to millions and millions of songs can be a bit daunting. With so many artists and songs to choose from, it can be hard to decide what to listen to – Barry Schwartz calls this the Paradox of Choice – he says too many options can be confusing and can create anxiety in a consumer. The folks at Spotify understand this. From the start they've been building tools to help make it easier for listeners to find music. For instance, they allow you to easily share playlists with your friends. I can create a music inbox playlist that any Spotify user can add music to. If I give the URL to my friends (or to my blog readers) they can add music that they think I should listen to.

Now with the Spotify / Echo Nest connection, Spotify is going one step further in helping their listeners deal with the paradox of choice. They are providing tools to make it easier for people to explore for and discover new music. The first way that Spotify is tapping into the Echo Nest platform is very simple and intuitive. Right-click on a playlist, and select 'Extend Playlist'. When you do that, the playlist will automatically be extended with songs that fit in well with the songs that are already in the playlist. Here's an example:

Screenshot: the Spotify / Echo Nest 'Extend Playlist' example
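
The plumbing behind 'Extend Playlist' isn't something I can show here, but conceptually the flow is simple. Here's a rough Python sketch under my own assumptions – the function and service names are made up for illustration and are not real Spotify or Echo Nest calls: take the songs already in the playlist as seeds, ask a similarity service for songs that fit with each seed, drop anything already in the playlist, and append the best of what's left.

    # Hypothetical sketch of an "Extend Playlist" style feature.
    # None of these names are real Spotify or Echo Nest APIs; they just
    # illustrate the flow: seed songs -> similar songs -> filter -> append.

    def extend_playlist(playlist, similarity_service, extra_tracks=10):
        """Append tracks that fit in well with the songs already in the playlist."""
        existing = {song.id for song in playlist}
        candidates = []
        for seed in playlist:
            # ask the (hypothetical) recommendation platform for songs similar to this seed
            candidates.extend(similarity_service.similar_songs(seed, limit=20))

        # keep the best-scoring candidates that aren't already in the playlist
        fresh = [c for c in candidates if c.song.id not in existing]
        fresh.sort(key=lambda c: c.score, reverse=True)
        return list(playlist) + [c.song for c in fresh[:extra_tracks]]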

So how is this different from any other music recommender? Well, there are a number of things going on here. First of all, most music recommenders rely on collaborative filtering (a.k.a. the wisdom of the crowds) to recommend music. This type of music recommendation works great for recommendations of popular and familiar artists … if you like the Beatles, you may indeed like the Rolling Stones. But Collaborative Filtering (CF) based recommendations don't work well when trying to recommend music at the track level. The data is often just too sparse to make recommendations. The wisdom of the crowds model fails when there is no crowd. When one is dealing with a Spotify-sized music collection of many millions of songs, there just isn't enough user data to give effective recommendations for all of the tracks. The result is that popular tracks get recommended quite often, while less well known music is ignored. To deal with this problem many CF-based recommenders will rely on artist similarity and then select tracks at random from the set of similar artists. This approach doesn't always work so well, especially if you are trying to make playlists with the recommender. For example, you may want a playlist of acoustic power ballads by hair metal bands of the 80s. You could seed the playlist with a song like Mötley Crüe's Home Sweet Home and expect to get similar power ballads, but instead you'd find your playlist populated with standard glam metal fare, with only a random chance that you'd have other acoustic power ballads. There are a boatload of other issues with wisdom-of-the-crowds recommendations – I've written about them previously – suffice it to say that it is a challenge to get a CF-based recommender to give you good track-level recommendations.
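
To make the sparsity point concrete, here is a toy item-item collaborative filtering sketch in Python – not how Spotify, the Echo Nest, or anyone else actually builds a recommender, and the play-count data is entirely invented. The point is simply that when two tracks share no listeners, the similarity score collapses to zero, and with millions of tracks most pairs share almost no listeners.

    import math

    # Toy item-item collaborative filtering over (listener, track) play counts.
    # All of the data here is invented purely for illustration.
    plays = {
        ("alice", "Home Sweet Home"): 12,
        ("alice", "Every Rose Has Its Thorn"): 7,
        ("bob",   "Home Sweet Home"): 3,
        ("bob",   "Kickstart My Heart"): 9,
        # an obscure track heard by a single listener -- the "no crowd" case
        ("carol", "Obscure Acoustic Ballad"): 2,
    }

    def track_vector(track):
        """Map a track to its sparse vector of listener play counts."""
        return {user: count for (user, t), count in plays.items() if t == track}

    def cosine(a, b):
        """Cosine similarity between two sparse listener -> play-count vectors."""
        common = set(a) & set(b)
        num = sum(a[u] * b[u] for u in common)
        den = (math.sqrt(sum(v * v for v in a.values()))
               * math.sqrt(sum(v * v for v in b.values())))
        return num / den if den else 0.0

    # Tracks that share listeners get a score; the obscure track gets nothing.
    print(cosine(track_vector("Home Sweet Home"),
                 track_vector("Every Rose Has Its Thorn")))  # ~0.97
    print(cosine(track_vector("Home Sweet Home"),
                 track_vector("Obscure Acoustic Ballad")))   # 0.0 -- no overlap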

The Echo Nest platform takes a different approach to track-level recommendation. Here’s what we do:

  • Read and understand what people are saying about music – we crawl every corner of the web and read every news article, blog post, music review and web page for every artist, album and track. We apply statistical and natural language processing to extract meaning from all of these words. This gives us a broad and deep understanding of the global online conversation about music.
  • Listen to all music – we apply signal processing and machine learning algorithms to audio to extract a number of perceptual features about music. For every song, we learn a wide variety of attributes including the timbre, song structure, tempo, time signature, key, loudness and so on. We know, for instance, where every drum beat falls in Kashmir, and where the guitar solo starts in Starship Trooper.
  • We combine this understanding of what people are saying about music and our understanding of what the music sounds like to build a model that can relate the two – to give us a better way of modeling a listener's reaction to music. There's some pretty hardcore science and math here. If you are interested in the gory details, I suggest that you read Brian's thesis, Learning the Meaning of Music.

What this all means is that with the Echo Nest platform, if you want to make a playlist of acoustic hair metal power ballads, we’ll be able to do that – we know who the hair metal bands are, and we know what a power ballad sounds like.  And since we don’t rely on the wisdom of the crowds for recommendation we can avoid some of the nasty problems that collaborative filtering can lead to.  I think that when people get a chance to play with the ‘Extend Playlist’ feature they’ll be happy with the listening experience.
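
To give a flavor of what that buys you, here is a small Python sketch of the acoustic-hair-metal-power-ballad idea – purely illustrative, with field names and values I've made up for this post (they are not the Echo Nest API): combine web-mined artist terms with audio-derived attributes and filter the catalog on both.

    from dataclasses import dataclass

    # Illustrative only: combining text-derived ("cultural") terms with
    # audio-derived attributes. Field names and values are invented for
    # this example and are not the Echo Nest API.

    @dataclass
    class Track:
        title: str
        artist: str
        artist_terms: set    # terms mined from the web, e.g. {"hair metal"}
        acousticness: float  # audio-derived, 0.0 (electric) .. 1.0 (acoustic)
        tempo: float         # beats per minute, from signal analysis

    def acoustic_power_ballads(catalog):
        """Keep slow-ish, acoustic-leaning tracks by hair metal bands."""
        return [t for t in catalog
                if "hair metal" in t.artist_terms
                and t.acousticness > 0.6
                and t.tempo < 100]

    catalog = [
        Track("Home Sweet Home", "Mötley Crüe", {"hair metal", "glam metal"}, 0.7, 90),
        Track("Kickstart My Heart", "Mötley Crüe", {"hair metal", "glam metal"}, 0.1, 180),
    ]
    for t in acoustic_power_ballads(catalog):
        print(t.artist, "-", t.title)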

It was great fun giving the Spotify demo at the SanFran MusicTech Summit. Even though Spotify is not available here in the U.S., the buzz that is occurring in Europe around Spotify is leaking across the ocean. When I announced that Spotify would be using the Echo Nest, there was an audible gasp from the audience. Some people were seeing Spotify for the first time, but everyone knew about it. It was great to be able to show Spotify using the Echo Nest. This demo was just a sneak preview. I expect there will be lots more interesting things to come. Stay tuned.


SanFran MusicTech Summit

This weekend I’ll be heading out to San Francisco to attend the SanFran MusicTech Summit.  The summit is a gathering of  musicians, suits, lawyers, and techies with a focus on the convergence of music, business, technology and the law.  There’s quite a set of music tech luminaries that will be in attendance, and the schedule of panels looks fantastic.

I’ll be moderating a panel on Music Recommendation Services. There are some really interesting folks on the panel: Stephen White from Gracenote, Alex Lascos from BMAT, James Miao from the Sixty One and Michael Papish from Media Unbound. I’ve been on a number of panels in the last few years. Some have been really good, some have been total train wrecks. The train wrecks occur when (1) panelists have an opportunity to show PowerPoint slides, (2) a business-oriented panelist decides that the panel is just another sales call, or (3) the moderator loses control and the panel veers down a rat hole of irrelevance. As moderator, I’ll try to make sure the panel doesn’t suck … but already I can tell from our email exchanges that this crew will be relevant, interesting and funny. I think the panel will be worth attending.

We already know some of the things that we want to talk about in the panel:

  • Does anyone really have a problem finding new music? Is this a problem that needs to be solved?
  • What makes a good music recommendation?
  • What’s better – a human or a machine recommender?
  • Problems in high stakes evaluations

And some things that we definitely do not want to talk about:

  • Business models
  • Music industry crisis

If you are attending the summit, I hope you’ll stop by the panel.
