Archive for category The Echo Nest
The Festival Explorer – Austin Edition
Posted by Paul in The Echo Nest on March 11, 2011
Looking for a tool to help you find the best bands to see in Austin during SXSW? Check out the Festival Explorer – Austin Edition:
It uses Echo Nest data like hotttnesss, top terms and similar artists to give you all sorts of ways to explore the more than 2,000 artists playing in Austin during the next week. The Festival Explorer is a free iPhone app, available in the App Store now: Festival Explorer Austin Edition
My favorite music-related panels for SXSW Interactive
Posted by Paul in events, Music, The Echo Nest on March 7, 2011
Spring break for geeks is nearly upon us. If you are going to SXSW Interactive and are interested in what is going on at the intersection of music and technology, be sure to check out these panels.
- Love, Music & APIs – Dave Haynes (SoundCloud) /Matt Ogle (The Echo Nest) – In the old days it was DJs, A&R folks, labels and record store owners that were the gatekeepers to music. Today, we are seeing a new music gatekeeper emerge… the developer. Using open APIs, developers are creating new apps that change how people explore, discover, create and interact with music. But developers can’t do it alone. They need data like gig listings, lyrics, recommendation tools and, of course, music! And they need it from reliable, structured and legitimate sources. In this presentation we’ll discuss and explore what is happening right now in the thriving music developer ecosystem. We’ll describe some of the novel APIs that are making this happen and what sort of building blocks are being put into place from a variety of different sources. We’ll demonstrate how companies within this ecosystem are working closely together in a spirit of co-operation. Each providing their own pieces to an expanding pool of resources from which developers can play, develop and create new music apps across different mediums – web, mobile, software and hardware. We’ll highlight some of the next-generation of music apps that are being created in this thriving ecosystem. Finally we’ll take a look at how music developers are coming together at events like Music Hack Day, where participants have just 24 hours to build the next generation of music apps. Someone once said, “APIs are the sex organs of software. Data is the DNA.” If this is true, then Music Hack Days are orgies.
- Finding Music With Pictures: Data Visualization for Discovery – Paul Lamere (shamelessly self-promoting) – The Echo Nest – With so much music available, finding new music that you like can be like finding a needle in a haystack. We need new tools to help us explore the world of music, tools that can help us separate the wheat from the chaff. In this panel we will look at how visualizations can be used to help people explore the music space and discover new, interesting music that they will like. We will look at a wide range of visualizations, from hand-drawn artist maps to highly interactive, immersive 3D environments. We’ll explore a number of different visualization techniques including graphs, trees, maps, timelines and flow diagrams, and we’ll examine different types of music data that can contribute to a visualization. Using numerous examples drawn from commercial and research systems, we’ll show how visualizations are being used now to enhance music discovery, and we’ll demonstrate some new visualization techniques coming out of the labs that we’ll find in tomorrow’s music discovery applications.
- Connected Devices, the Cloud & the Future of Music – Brenna Ehrlich, Malthe Sigurdsson, Steve Savoca, Travis Bogard – Discovering and listening to music today is a fragmented experience. Most consumers discover in one place, purchase in another, and listen somewhere else. While iTunes remains the dominant way people buy and organize their digital music collections, on-demand music services like Rdio, MOG and Spotify are creating new ways to discover, play, organize, and share music. The widespread adoption of smartphones and connected devices, along with the growing ubiquity of wireless networks, has increased the promise of music-in-the-cloud, but are consumers ready to give up iTunes and owning their music outright? While early adopters and music enthusiasts are latching on, what will it take for the mainstream to shift their thinking? This session will explore how connected devices and cloud services will affect the way consumers find and buy music going forward.
- Expressing Yourself Musically with Mobile Technology – Ge Wang – Smule – The mobile landscape as we know it is focused heavily on gaming, productivity and social media applications. But as mobile technology continues to advance and phones become smarter, people will search for even more intimate, immersive and interactive ways of expressing themselves. Today, mobile technologies have made music creation easy, affordable and accessible to the masses, enabling users of all ages, abilities and backgrounds to create and share music, regardless of previous musical knowledge. Whether you’re a fan of hip hop, classical, pop or video game theme music, there is an app for everyone. And the entertainment industry has taken notice – almost every big-name artist or brand has an app for mobile devices. Most of them are just fancy message boards providing information, but some are pushing the limits of what it means to interact with the artist or brand. From the palm of your hand you can Auto-Tune your voice to sound like your favorite hip hop star, play an instrument designed by Jordan Rudess of Dream Theater or join a virtual Glee club. Each of these artists and brands is building a community through mobile apps that give anyone the ability to explore their inner star. This presentation will discuss how advances in mobile technology have opened up a new world of expression to everyone and enabled users to broadcast their own musical talents across the globe.
- How Digital Media Drives International Collaboration in Music – Farb Nivi, Gunnar Madsen, Russell Raines, Stephen Averill, Troy Campbell – The House of Songs is an Austin, TX-based project focused on musical creativity through international collaboration. The House has been operating since September 2009 and has provided the foundation for creative collaboration between some of the strongest Austin and Scandinavian songwriters. Through these experiences, the participating songwriters have built numerous relationships and gained unique experiences benefiting their musical careers. This panel will discuss how digital media influences these collaboration efforts now and in the future. The conversation will also cover current trends in this area, challenges artists face in developing and expanding their audience, how artists today can succeed in procuring worldwide digital revenue, and ultimately the need for this conversation.
- Metadata: The Backbone of Digital Music Commerce – Bill Wilson, Christopher Read, Fayvor Love, Kiran Bellubb – Who cares about metadata? You should. In a world where millions of digital music transactions take place on a daily basis, it’s more important than ever that music, video, and application content appears correctly in digital storefronts, that customers can find it, and that the right songwriter, artist and/or content owner gets paid. This panel will review the current landscape and make sense of the various identifiers such as ISRC, ISWC, GRID and ISNI, as well as XML communications standards such as DDEX ERN and DSR messages. We’ll also cover why these common systems are critical as the backbone of digital music commerce, from the smallest indie artist to the biggest corporate commerce partners.
- Music & Metadata: Do Songs Remain The Same? – Jason Schultz, Jess Hemerly, Larisa Mann – Metadata may be an afterthought when it comes to most people’s digital music collections, but when it comes to finding, buying, selling, rating, sharing, or describing music, little matters more. Metadata defines how we interact and talk about music—from discreet bits like titles, styles, artists, genres to its broader context and history. Metadata builds communities and industries, from the local fan base to the online social network. Its value is immense. But who owns it? Some sources are open, peer-produced and free. Others are proprietary and come with a hefty fee. And who determines its accuracy? From CDDB to MusicBrainz and Music Genome Project to AllMusic, our panel will explore the importance of metadata and information about music from three angles. First, production, where we’ll talk about the quality and accuracy of peer-produced sources for metatdata and music information, like MusicBrainz and Wikipedia, versus proprietary sources, like CDDB. Second, we’ll look at the social importance of music data, like how we use it to discuss music and how we tag it to enhance music description and discovery. Finally, we’ll look at some legal issues, specifically how patent, copyright, and click-through agreements affect portability and ownership of data and how metadata plays into or out of the battles over “walled garden” systems like Facebook and Apple’s iEmpire. We’ll also play a meta-game with metadata during the panel to demonstrate how it works and why it is important.
- Neither Moguls nor Pirates: Grey Area Music Distribution – Alex Seago, Heitor Alvelos, Jeff Ferrell, Pat Aufderheide, Sam Howard-Spink – The debate surrounding music piracy versus the so-called collapse of the music industry has largely been bipolar, and yet so many other processes of music distribution have been developing. From online “sharity” communities that digitize obscure vinyl never released in digital format (a network of cultural preservation, one could argue), all the way to netlabels that could not care less about making money out of their releases, as well as “grime” networks made up of bedroom musicians constantly remixing each other, there is a vast wealth of possibilities driving music in the digital world. This panel will present key examples emerging from this “grey area”, and discuss future scenarios for music production and consumption that stand proudly outside the bipolar box.
- SXSW Music Industry Geeks Meetup – Todd Hansen – As the SXSW Interactive Festival continues to grow, it often becomes harder to find and network with the specific type of people you want to meet. Hence, a full slate of daytime Meet Ups is scheduled for the 2011 event. These Meet Ups are definitely not panel sessions, nor do they offer any kind of formal presentation or AV setup. On the contrary, these sessions are a room where many different conversations can (and will) go on at once. This timeslot is for SXSW Interactive, Gold and Platinum registrants working in the music industry to network with other technology geeks in this industry. Cash bar onsite.
There you go! See you all soon in Austin.
Finding the most dramatic bits in music
Posted by Paul in code, fun, Music, The Echo Nest on February 20, 2011
Evanescence is one of my guilty listening pleasures. I enjoy how Amy Lee’s voice is juxtaposed against the wall of sound produced by the rest of the band. For instance, in the song Imaginary, there are 30 seconds of sweet voice + violins before you get slammed by the hammer of the gods:
This extreme change in energy makes for a very dramatic moment in the music. It is one of the reasons that I listen to progressive rock and nu-metal (despite the mockery of my co-workers). However, finding these dramatic gems in the music is hard – there’s a lot of goth- and nu-metal to filter through, and much of it is really bad. After even just a few minutes of listening I feel like I’m lost at a Twicon. What I need is a tool to help me find these dramatic moments, to filter through the thousands of songs to find the ones that have those special moments when the beauty comes eye to eye with the beast.
My intuition tells me that a good place to start is to look at the loudness profile of songs with these dramatic moments. I would expect to see a sustained period of relatively soft music followed by a sharp transition to a sustained period of loud music. This is indeed what we see:
This plot shows a windowed average of the Echo Nest loudness for the first 50 seconds of the song. In this plot we see a relatively quiet first 10 seconds (hovering between -21 and -18 dB), followed by an extremely loud section (around -10 dB). (Note that this version of the song has a shorter intro than the version in the YouTube video.) If we can write some code to detect these transitions, then we will have a drama detector.
The Drama Detector: Finding a rising edge in a loudness profile is pretty easy, but we want to go beyond that and make sure we have a way to rank the edges so that we can find the most dramatic changes. There are two metrics that we can use to rank the amount of drama: (1) the average change in loudness at the transition and (2) the length of the quiet period leading up to the transition. The bigger the change in volume and the longer it has been quiet, the more the drama. Let’s look at another dramatic moment as an example:
The opening 30 seconds of Blackest Eyes by Porcupine Tree fit the dramatic mold. Here’s an annotated loudness plot for the opening:
The drama-finding algorithm simply looks for loudness edges above a certain dB threshold and then works backward to find the beginning of the ‘quiet period’. To make a ranking score that combines both the decibel change and the quiet period, I tried the simplest thing that could possibly work: just multiply the change in decibels by the quiet period (in seconds).
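Here’s a minimal sketch of the idea in Python. It is not the exact code behind these plots – the window size, the jump threshold and the quiet-period tolerance are all guesses – but it shows the shape of the algorithm, operating on a loudness profile expressed as (time, dB) pairs:

```python
# A minimal sketch of the drama detector (not the exact code).  It expects
# a loudness profile as a list of (time_in_seconds, loudness_in_dB) pairs,
# e.g. a windowed average over the Echo Nest per-segment loudness values.

def window_mean(profile, t0, t1):
    """Average loudness of all samples with t0 <= time < t1."""
    vals = [db for t, db in profile if t0 <= t < t1]
    return sum(vals) / len(vals) if vals else None

def drama_score(profile, window=5.0, min_jump=8.0, tolerance=3.0):
    """Score each rising loudness edge as (dB jump) * (seconds of quiet
    leading up to it) and return the best score found."""
    best = 0.0
    for t, _ in profile:
        before = window_mean(profile, t - window, t)
        after = window_mean(profile, t, t + window)
        if before is None or after is None:
            continue
        jump = after - before
        if jump < min_jump:
            continue                     # not a dramatic enough edge
        # Walk backward from the edge to find where the quiet period began.
        quiet_start = t
        for tq, dbq in reversed([p for p in profile if p[0] < t]):
            if abs(dbq - before) > tolerance:
                break
            quiet_start = tq
        best = max(best, jump * (t - quiet_start))
    return best
```

Let’s try this metric out on a few songs to see how it works: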
- Porcupine Tree – Blackest Eyes – score: 18 x 24 = 432
- Evanescence – Imaginary (w/ 30 second intro) – score: 299
- Lady Gaga – Poker Face – score: 82 – not very dramatic
- Katy Perry – I Kissed a Girl – score: 33 – extremely undramatic
This seems to pass the sanity test: dramatic songs score high, non-dramatic songs score low (using my very narrow definition of dramatic). With this algorithm in mind, I then went hunting for some drama. To do this, I found the 50 artists most similar to Evanescence, and for each of these artists I found their 20 hotttest songs. I then examined each of these 1,000 songs and ranked them in dramatic order. So put on your pancake and eye shadow, dim the lights, light the candelabra and enjoy some dramatic moments.
First up is the wonderfully upbeat I Want to Die by Mortal Love. This 10-minute-long song has a whopping drama score of 2014. There are a full two minutes of quiet starting at 5 minutes into the song before the dramatic moment (with 16 dB of dramatic power!) occurs:
The dramatic moment occurs at 7:12 into the song – but I’m not sure it’s worth the wait. Not for me, though it’s probably something they could play at the Forks, Washington high school prom.
The song Jillian by Within Temptation gets a score of 861 for this dramatic opening:
Now that’s drama! Take a look at the plot:
The slow build – and then the hammer hits. You can almost see the vampires and the werewolves colliding in a frenzy.
During this little project I learned that most of the original band members of Evanescence left and formed another band called We Are the Fallen – with a very similar sound (leading me to suspect that there was a whole lot of a very different kind of drama in Evanescence). Here’s their dramatic Tear The World Down (which scores a 468):
Finally we have the track Maria by Guano Apes – perhaps my favorite of the bunch:
Update: @han wondered how well the drama detector fared on Romantic-era music. Here’s a plot for Berlioz’s Symphonie fantastique: March to the Scaffold:
This gets a very dramatic score of 361. Note that in the following rendition the dramatic bit that aligns with the previous plot occurs at 1:44:
Well – there you have it: a little bit of code to detect dramatic moments in music. It can’t, of course, tell you whether or not the music is good, but it can help you filter thousands of songs down to a small set where you can easily preview them all. To build the drama detector, I used a few of The Echo Nest APIs, including:
- song/search – to search for songs by name and to get the analysis data (where all the detailed loudness info lives)
- artist/similar – to find all the similar artists to a seed (in this case Evanescence)
The code is written in Python using pyechonest, and the plots were made using gnuplot. If you are interested in finding your own dramatic bits, let me know and I’ll post the code somewhere.
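In the meantime, here’s a rough sketch of how the data plumbing might look. The song.search call and the audio_summary attribute are real pyechonest features, but treat the analysis-fetching glue (and the exact JSON keys) as an approximation:

```python
# Sketch: fetch a song's detailed analysis (where the per-segment loudness
# lives) and feed it to the drama_score() sketch from earlier in the post.
import json
import urllib2   # this was the Python 2 era; today you'd use urllib.request

from pyechonest import song

s = song.search(artist='Evanescence', title='Imaginary', results=1)[0]
url = s.audio_summary['analysis_url']      # the full analysis lives here
analysis = json.load(urllib2.urlopen(url))
profile = [(seg['start'], seg['loudness_max'])
           for seg in analysis['segments']]
print(drama_score(profile))
```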
Spot the sweatsedo!
Posted by Paul in fun, The Echo Nest, video on February 16, 2011
While all of the hackers were making music hacks at last weekend’s Music Hack Day, the non-technical staff from The Echo Nest were working on their own hack – a video of the event. They’ve posted it on YouTube. It is pretty neat – with a cool remix soundtrack by Ben Lacker.
But wait … they also tweeted this contest:
To win the contest, you had to count the number of Echo Nest t-shirts and Sweatsedos that appear in the video and tweet the result. It turned out to be a really hard contest. On my first try I counted 12, but there were many more; some were very, very subtle. But we do have a winner! Here’s the answer key:
Last night at 7:30 PM EST one Kevin Dela Rosa posted this tweet:
Congrats to Kevin for his excellent counting ability! Kevin, please email your size and shipping info to Paul@echonest.com and we’ll get you into the smooth and velvety blue!
The Labyrinth of Genre
Posted by Paul in code, data, tags, The Echo Nest, visualization on January 16, 2011
I’m fascinated with how music genres relate to each other, especially how one can use different genres as stepping stones through the vast complexities of music. There are thousands of genres; some, like rock or pop, represent thousands of artists, while others, like Celtic Metal or Humppa, may represent only a handful. Building a map by hand that represents the relationships among all of these genres is a challenge. Is Thrash Metal more closely related to Speed Metal or to Power Metal? To sort this all out I’ve built a Labyrinth of Genre that lets you explore the many genres. The Labyrinth lets you wander through about 1,000 genres, listening to samples from representative artists.
Click on a genre and the labyrinth will expand to show half a dozen similar genres, and you’ll hear songs in the genre.
I built the labyrinth by analyzing a large collection of last.fm tags. I used the cosine distance between tf-idf weighted artist vectors as the distance metric between tags. When you click on a node, I attach the six closest tags that haven’t already been attached to the graph. I then use the Echo Nest APIs to get all the media.
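Here’s a sketch of that computation. The data layout ({tag: {artist: count}}) is invented for illustration – the real code worked over a much larger tag dump – but the tf-idf weighting and the cosine distance are the heart of it:

```python
# Sketch: each tag (genre) becomes a vector over the artists it has been
# applied to, weighted by tf-idf; tags are then compared with cosine
# similarity.  The input layout is hypothetical.
import math
from collections import defaultdict

def tag_vectors(tag_counts):
    """tag_counts: {tag: {artist: times_tagged}}.
    Treats each tag as a document and artists as terms."""
    n_tags = len(tag_counts)
    df = defaultdict(int)          # how many tags mention each artist
    for artists in tag_counts.values():
        for a in artists:
            df[a] += 1
    vectors = {}
    for tag, artists in tag_counts.items():
        vectors[tag] = dict((a, tf * math.log(float(n_tags) / df[a]))
                            for a, tf in artists.items())
    return vectors

def cosine(u, v):
    dot = sum(w * v[a] for a, w in u.items() if a in v)
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def closest_tags(tag, vectors, attached, k=6):
    """Pick the k nearest genres not already attached to the graph --
    roughly what happens when you click a node in the Labyrinth."""
    candidates = [(cosine(vectors[tag], vectors[t]), t)
                  for t in vectors if t != tag and t not in attached]
    return [t for score, t in sorted(candidates, reverse=True)[:k]]
```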
Even though it’s a pretty simple algorithm, it is quite effective at grouping similar genres. If you are interested in wandering around a maze of music, give the Labyrinth of Genre a try.
The Music Maze
Posted by Paul in code, fun, Music, The Echo Nest, video, visualization, web services on December 20, 2010
I wrote an application over the weekend called Music Maze. The Music Maze lets you wander through the maze of similar artists until you find something you like. You can give it a try here: The Music Maze (be forewarned, the app plays music upon loading).
We’ve seen the idea behind the Music Maze in other apps like Musicovery and Tuneglue’s Music Map. The nifty thing about the Music Maze is that I didn’t have to write a single line of server code to make it all happen. The Music Maze web app talks directly to The Echo Nest API. There’s no middle man. The artist graph, the album art, the links to audio – everything is pulled on demand from the Echo Nest API. This is possible because the Echo Nest API now supports JSONP requests (in beta, full release coming soon!). With JSONP, an AJAX app can escape the JavaScript sandbox and make calls to third-party web services. There’s no need for me to set up a server to proxy calls to the Echo Nest, no Apache or Tomcat, no MySQL, no worries about scaling. This makes it incredibly easy to host and deploy this app. I just toss my HTML, JavaScript and CSS files into an Amazon S3 bucket, make them world readable, and I’m done. It really has never been easier to create music apps. This whole app is less than 500 lines of JavaScript, written in a few hours on a Sunday morning while the rest of the family was still asleep. It is great to see all of these technologies coming together to make it easy to create music apps.
(Be sure to check out the JavaScript InfoVis Toolkit. It does all of the graphical heavy lifting in this app. It’s pretty neat.)
Jennie’s ultimate road trip
Posted by Paul in code, fun, Music, The Echo Nest on October 20, 2010
Last weekend at Music Hack Day Boston, I teamed up with Jennie, my 15-year-old daughter, to build her idea for a music hack, which we’ve called Jennie’s Ultimate Road Trip. The hack helps you plan a road trip that maximizes the number of great concerts you can attend along the way. You give the app your starting and ending cities, your starting and ending dates, and the names of some of your favorite artists, and Jennie’s Ultimate Road Trip will search through the many events to find the ones along your route and schedule that you’d like to see, and give you an itinerary and a map.
We used the wonderful SongKick API to grab events for all the nearby cities. I was quite surprised at how many events SongKick would find. For just a single week, in the geographic area between Boston and New York City, SongKick found 1,161 events with 2,168 different artists. More events and more artists make it easier to find a route that will give a satisfying set of concerts – but they can also make finding a route a bit more computationally challenging (more on that later). Once we had the set of possible artists that we could visit, we needed to narrow down the list to the ones that would be of most interest to the user. To do this we used the new Personal Catalogs feature of the Echo Nest API. We created a personal catalog containing all of the potential artists (so for our trip to NYC from Boston, we’d create a catalog of 2,168 artists). We then used the Echo Nest artist similarity APIs to get recommendations for artists within this catalog. This yielded a set of 200 artists playing in the area that best match the user’s taste.
The next bit was the tricky bit. First, we subsetted the events to include just the events for the recommended set of artists. Then we had to build the optimal route through the events, considering the date and time of each event, the preference the user has for the artist, whether or not we’ve already been to an event for this artist on the trip, how far out of our way the venue is from our ultimate destination, and how far the event is from our previous stop. For anyone who saw me looking grouchy on Sunday morning during the hack day, it was because it was hard to figure out a good cost function that would weigh all of these factors: artist preference, travel time and distance between shows, event history. The computer science folks who read this blog will recognize that this route finding is similar to the ‘travelling salesman problem‘ – but with a twist: instead of finding a route between cities, which don’t tend to move around too much, we have to find a path through a set of concerts where the artists are in different places every night. I call this the ‘travelling rock star’ problem. Ultimately I was pretty happy with how the routing algorithm turned out; it can find a decent route through a thousand events in less than 30 seconds. A toy version of the idea is sketched below.
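This greedy sketch is nothing like the hack’s actual algorithm – the cost weights are invented, venues are plain (x, y) points, and the events and pref structures are hypothetical – but it shows the day-by-day trade-off the cost function has to make:

```python
# A toy, greedy take on the 'travelling rock star' problem: each night,
# pick the event that best trades off artist preference against travel
# to the show and drift away from the final destination.
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def pick_route(events, start, end, pref, w_travel=1.0, w_detour=0.5):
    """events: dicts with 'artist', 'venue' (an (x, y) pair) and 'date'.
    pref: {artist: preference score}.  Picks at most one event per night."""
    route, here, seen = [], start, set()
    for date in sorted(set(e['date'] for e in events)):
        def value(e):
            v = pref.get(e['artist'], 0.0)
            if e['artist'] in seen:                  # repeat-artist penalty
                v *= 0.25
            v -= w_travel * dist(here, e['venue'])   # travel to the show
            v -= w_detour * dist(e['venue'], end)    # drift from our goal
            return v
        best = max((e for e in events if e['date'] == date), key=value)
        if value(best) > 0:                          # skip dud nights
            route.append(best)
            here = best['venue']
            seen.add(best['artist'])
    return route
```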
Jennie joined me for a few hours at the Music Hack Day – she coded up the HTML for the web form and made the top banner (it was pretty weird to look over at her computer and see her typing in raw HTML tags with attached CSS attributes – kids these days). We got the demo done in time, and with the power of caching it will generate routes and plot them on a map using the Google API. Unfortunately, if your route doesn’t happen to be in the cache, it can take quite a bit of time to get a route out of the app – gathering events from SongKick, getting recommendations from the Echo Nest, and finding the optimal route all add up to an app that can take five minutes before you get your answer. When I get a bit of time, I’ll take another pass to speed things up. When it is fast enough, I’ll put it online.
It was a fun demo to write. I especially enjoyed working on it with my daughter. And we won the SongKick prize, which was pretty fantastic.
Great Caesar’s Ghost!
Posted by Paul in Music, The Echo Nest on October 18, 2010
The Echo Nest now has an official Editor in Chief. Eliot Van Buskirk has joined the Echo Nest staff. He’s writing about music apps at Evolver.fm. He already has a whole bunch of great writeups about Music Hack Day Boston and all of the projects that have come out of it. Check it out: evolver.fm
The Echo Nest gets Personal
Posted by Paul in code, Music, playlist, recommendation, remix, The Echo Nest, web services on October 15, 2010
Here at the Echo Nest we’ve just added a new feature to our APIs called Personal Catalogs. This feature lets you make all of the Echo Nest features work within your own world of music. With Personal Catalogs (PCs) you can define application- or user-specific catalogs (in terms of artists or songs) and then use these catalogs to drive the behavior of other Echo Nest APIs. PCs open the door to all sorts of custom apps built on the Echo Nest platform. Here are some examples:
Create better genius-style playlists – With PCs I can create a catalog that contains all of the songs in my iTunes collection. I can then use this catalog with the Echo Nest playlist API to generate interesting playlists based upon my own personal collection. I can create a playlist of my favorite, most danceable songs for a party, or a playlist of slow, low-energy jazz songs for late-night reading music.
Create hyper-targeted recommendations – With PCs I can make a catalog of artists and then use the artist/similar APIs to generate recommendations within this catalog. For instance, I could create an artist catalog of all the bands that are playing this weekend in Boston and then build a Music Hack Day recommender that tells each visitor what bands they should see in Boston based upon their musical tastes.
Get info on lots of stuff – people often ask questions about their whole music collection. Like, ‘what are all the songs that I have that are at 113 BPM?‘, or ‘what are the softest songs?’ Previously, to answer these sorts of questions, you’d have to query our APIs one song at a time – a rather tedious and potentially lengthy operation (if you had, say, 10K tracks). With PCs, you can make a single catalog for all of your tracks and then make bulk queries against this catalog. Once you’ve created the catalog, it is very quick to read back all the tempos in your collection.
Represent your music taste – since a Personal Catalog can contain info such as playcounts, skips, and ratings for all of the artists and songs in your collection, it can serve as an excellent proxy for your music taste. Current and soon-to-be-released APIs will use personal catalogs as a representation of your taste to give you personalized results: playlisting, artist similarity, and music recommendations, all personalized based on your listening history.
These examples just scratch the surface. We hope to see lots of novel applications of Personal Catalogs. Check out the APIs, and start writing some code.
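As a starting point, here’s a hedged sketch of the bulk-query example from above, using pyechonest’s catalog module. The module is real, but treat the exact signatures and item format as approximations and check the current docs:

```python
# Sketch: build a song catalog, then read items back in bulk instead of
# querying one song at a time.
from pyechonest import catalog

my_music = catalog.Catalog('my_itunes', type='song')
my_music.update([
    {'action': 'update',
     'item': {'artist_name': 'Evanescence', 'song_name': 'Imaginary'}},
    {'action': 'update',
     'item': {'artist_name': 'Within Temptation', 'song_name': 'Jillian'}},
])

# One paged call returns many items (with audio data such as tempo),
# rather than one API round trip per song.
for item in my_music.read_items(buckets=['audio_summary'], results=100):
    print(item)
```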