
The Drop Machine

I spent last weekend in Cannes, participating in the MIDEM Hack Day – an event where music hackers from around the globe gather to hack on music. My hack is called The Drop Machine.  It is a toy web app that plays nothing but the drops.  Here’s a video demo of it:

[youtube http://youtu.be/4C6a-MqAF_A]

The interesting bit in this hack is how The Drop Machine finds the drops. I've tried a number of different ways to find the drops in the past – for instance, the app Where's the Drama found the most dramatic bits of music based on changes in music dynamics. This did a pretty good job of finding the epic builds in certain kinds of music, but it wasn't a very reliable drop detector. The Drop Machine takes a very different approach – it crowdsources the finding of the drop. And it turns out, the crowd knows exactly where the drop is. So how do we crowdsource finding the drop? Well, every time you scrub your music player to play a particular bit of music on Spotify, that scrubbing is anonymously logged. If you scrub to the chorus or the guitar solo or the epic drop, it is noted in the logs. When one person scrubs to a particular point in a song, we learn a tiny bit about how that person feels about that part of the song – perhaps they like it more than the part that they are skipping over, or perhaps they are trying to learn the lyrics or the guitar fingering for that part of the song. Who's to say? On an individual level, this data wouldn't mean much. The cool part comes from the anonymous aggregate behavior of millions of listeners, from which a really detailed map of the song emerges. People scrub to just before the best parts of the song to listen to them. Let's take a look at a few examples.

For starters here’s a plot that shows the most listened to part of the song In the Air Tonight by Phil Collins based upon scrubbing behavior:

[Plot: the most listened-to parts of 'In the Air Tonight', based on scrubbing behavior]

The prominent peak at 3:40 is the point when the drums come in.  Based upon scrubbing behavior alone, we are able to find arguably the most interesting bit of that song.

Here’s another example – Whole Lotta Love by Led Zeppelin:

[Plot: scrubbing behavior for 'Whole Lotta Love']

The trough at 1:40 corresponds to the psychedelic bits while the peak at 3:20 is the guitar solo. Again, by looking at scrubbing behavior we get a really good indication of what parts of a song listeners enjoy the most.
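I obviously can't share the real data pipeline here, but the aggregation idea is simple enough to sketch. Here's a minimal, hypothetical version in Javascript, assuming the anonymized scrub events for a song arrive as plain seek offsets in seconds:

```javascript
// Hypothetical sketch: bucket anonymized seek positions into
// one-second bins so that peaks in the histogram mark the parts of the
// song people scrub to.

function buildScrubHistogram(seekOffsetsSeconds, songDurationSeconds) {
  const bins = new Array(Math.ceil(songDurationSeconds)).fill(0);
  for (const offset of seekOffsetsSeconds) {
    const bin = Math.min(bins.length - 1, Math.floor(offset));
    bins[bin] += 1;
  }
  return bins;
}

// Example: scrubs clustered near 220s would produce the 3:40 peak seen
// in the 'In the Air Tonight' plot above.
const bins = buildScrubHistogram([219, 220, 221, 222, 45], 345);
```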

When we look at scrubbing behavior for dance music, especially dubstep and brostep, we see a very characteristic strong peak, usually at around a minute into the song. This is invariably ‘the drop’. Here are some examples:

[Plots: scrubbing behavior for three dubstep tracks, each with a strong peak near the one-minute drop]

The scrubbing behavior not only shows us where the drop is, it also shows us how intense the drop is – drops with lots of appeal get lots of attention (and lots of scrubs), while songs with milder drops get less attention. Here's a milder drop by Skrillex:

[Plot: a milder Skrillex drop, with a less prominent scrubbing peak]

Compare that to a much more intense drop:

[Plot: a more intense drop, with a sharp scrubbing peak]

Songs with more intense drops have more prominent scrubbing and listening peaks at the drop than others.  The Drop Machine uses the prominence of the peak at the drop to find the songs with the most intense drops.
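The post doesn't spell out the prominence measure, but it can be approximated in a few lines. This is just one plausible take (my assumption, not the actual metric): compare the tallest peak in the scrub histogram to the song's median level.

```javascript
// One plausible prominence measure (my assumption, not the actual
// metric): how far the tallest histogram peak stands above the song's
// typical (median) scrub level.

function peakProminence(bins) {
  const sorted = [...bins].sort((a, b) => a - b);
  const median = sorted[Math.floor(sorted.length / 2)];
  const peak = Math.max(...bins);
  return {
    peakIndex: bins.indexOf(peak),  // where the drop likely is
    prominence: median > 0 ? peak / median : peak  // higher = more intense
  };
}
```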

Putting it all together, the Drop Machine searches through the most popular dance, dubstep and brostep tracks and finds the ones with the most prominent listening peaks based upon scrubbing behavior. It then surfaces these tracks into a playlist and plays 10 seconds of each track, centered on the drop. The result is non-stop drop. Add in a bit of animation synchronized to the music and that's the Drop Machine.

Currently, the Drop Machine is an internal-use-only hack. I'm working on a public version, so hopefully the world won't have to wait too long to listen to the Drop Machine.


Infinite Jukebox improved

It has been over two years since the Infinite Jukebox was first released after Music Hack Day Boston 2012.  Since then millions of people have spent nearly a million hours listening to infinite versions of their favorite songs. It has been my most popular hack.

There has always been a problem with the Infinite Jukebox. Certain songs have sections with very dense interconnections. For these songs the Infinite Jukebox would sometimes get stuck playing the same section of the song for many minutes or hours before breaking free. This morning I finally sat down and worked out a good way to deal with this problem. The Infinite Jukebox will now try to steer the song toward the beats that have been played the least. When the jukebox is deciding which beat to play next, it will search through all the possible future paths up to five beats into the future to find the path that brings the jukebox to the least-played part of the song. The result is that we exit out of the rat's nest of connections rather quickly. The code is quite succinct – just 20 lines in one recursive function. Good payback for such a small amount of code.
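My actual 20 lines aren't reproduced here, but the idea fits in a small recursive sketch. Assume each beat object carries a playCount, its sequential next beat, and a neighbors list of similar-sounding jump candidates:

```javascript
// A reconstruction of the idea (not the actual code).

const MAX_DEPTH = 5;

// Total play count along the least-played path reachable from this
// beat within `depth` further steps.
function pathScore(beat, depth) {
  if (!beat) return 0;
  if (depth === 0) return beat.playCount;
  const next = [beat.next, ...beat.neighbors].filter(Boolean);
  if (next.length === 0) return beat.playCount;
  return beat.playCount + Math.min(...next.map(b => pathScore(b, depth - 1)));
}

// Pick the next beat (sequential or jump) whose lookahead path reaches
// the least-played parts of the song.
function selectNextBeat(current) {
  const candidates = [current.next, ...current.neighbors].filter(Boolean);
  if (candidates.length === 0) return null; // end of song
  return candidates.reduce((best, b) =>
    pathScore(b, MAX_DEPTH) < pathScore(best, MAX_DEPTH) ? b : best);
}
```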

[Screenshot: the Infinite Jukebox playing 'Turn Down' by Rittz]

While I was in the codebase, I made a few other minor changes. I switched around the color palettes to favor more green and blue colors, and I use a different color to draw the beat connections when we make a jump.


Outside Lands Recommendations

I am at Outside Hacks this weekend – a hackathon associated with the Outside Lands music festival. For this hack I thought it would be fun to try out the brand new Your Music Library endpoints in the Spotify Web API. These endpoints let you inspect and manipulate the tracks that a user has saved to their music. Since the hackathon is all about building apps for a music festival, it seems natural to create a web app that gives you festival artist recommendations based upon your Spotify saved tracks. The result is the Outside Lands Recommender:

[Screenshot: Your Outside Lands Festival Recommendations]

The Recommender works by pulling in all the saved tracks from your Spotify 'Your Music' collection, aggregating the artists, and then using the Echo Nest Artist Similar API to find festival artists that match or are similar to those artists. The Spotify API is then used to retrieve artist images and audio samples for the recommendations, which are presented in all of their Bootstrap glory.
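Here's a condensed sketch of that pipeline (paging and error handling elided). It assumes a Spotify OAuth token with the user-library-read scope, an Echo Nest API key, and a hypothetical festivalArtists Set built from the festival lineup:

```javascript
async function recommendFestivalArtists(spotifyToken, echoNestKey, festivalArtists) {
  // 1. Pull saved tracks from the user's 'Your Music' collection.
  const resp = await fetch('https://api.spotify.com/v1/me/tracks?limit=50', {
    headers: { Authorization: `Bearer ${spotifyToken}` }
  });
  const saved = await resp.json();

  // 2. Aggregate artist counts across the saved tracks.
  const counts = {};
  for (const item of saved.items) {
    const name = item.track.artists[0].name;
    counts[name] = (counts[name] || 0) + 1;
  }

  // 3. Ask the Echo Nest for artists similar to each saved artist and
  //    keep the ones that are playing the festival.
  const recs = new Set();
  for (const name of Object.keys(counts)) {
    const url = 'http://developer.echonest.com/api/v4/artist/similar' +
      `?api_key=${echoNestKey}&results=15&name=${encodeURIComponent(name)}`;
    const data = await (await fetch(url)).json();
    for (const artist of data.response.artists || []) {
      if (festivalArtists.has(artist.name)) recs.add(artist.name);
    }
  }
  return [...recs];
}
```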

This was a pretty straightforward app, which was good since I only had about half the normal hacking time for a weekend hackathon. I spent the other half of the time building a festival dataset for hackers to use (as well as answering lots of questions about both the Spotify and Echo Nest APIs).

It has been a very fun hackathon. It is extremely well organized, the Weebly location is fantastic, and the quality of hackers is very high. I’ve already seen some fantastic looking hacks and we are still a few hours from demo time.  Plus, this happened.

[Photo: Two more victims]


Echo Nest Radio on Spotify

[Screenshot: Echo Nest Radio playing 'Wild Flower' by The Cult]

I work for Spotify now – so for my Sunday morning programming project I thought I’d write a simple Spotify App that uses The Echo Nest API to create playlists based upon a seed song. I’ve done this before, but the last time was a few years ago and the Spotify Apps API has changed quite a bit since then, so I thought I’d use this as an opportunity to freshen my understanding of the Spotify API as well as to demonstrate how to write a Spotify App that uses The Echo Nest API.

I created an Echo Nest Radio app – it is a very simple app – it looks at what song you are currently playing and generates an Echo Nest playlist based upon that seed song. The code is pretty straightforward. It grabs the Now Playing track from Spotify, gets the track's ID and uses that as a seed for The Echo Nest song-radio static playlist API. This call returns Spotify track IDs (thanks to our Rosetta Stone ID mapping layer) that are then tossed into a temporary playlist, which is used to build a List view that is displayed in the app. All told it is just over 100 lines of Javascript.
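The heart of the app is that single Echo Nest call. Here's a hedged sketch of what it looks like – the endpoint and song-radio type are standard Echo Nest API, but the exact Rosetta bucket and ID spellings are from memory, so treat the parameter details as assumptions:

```javascript
function fetchSongRadio(apiKey, spotifyTrackId) {
  const url = 'http://developer.echonest.com/api/v4/playlist/static' +
    `?api_key=${apiKey}&type=song-radio&results=20` +
    `&song_id=spotify-WW:track:${spotifyTrackId}` +
    '&bucket=id:spotify-WW&bucket=tracks&limit=true';

  // Each returned song carries a foreign Spotify track ID that can be
  // dropped straight into a temporary playlist for the List view.
  return fetch(url)
    .then(r => r.json())
    .then(data => data.response.songs);
}
```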

It did take me a bit of time to get the hang of the newer Spotify Apps API. It has changed quite a bit since I last used it and many of the examples that I relied on in the past, like Peter Watt’s excellent Kitchen Sink app, use an older version of the API. The new version has some significant changes including a nifty new module system along with a new approach to managing long-running queries that relies on promises. Once I got the hang of it, I decided that I like the new version – it makes for cleaner code and a much more efficient app since much less data needs to be moved around.

The app is on github – to use it you need to sign up for a developer account on Spotify and follow the rest of the Getting Started instructions (this means if you are not a developer, you’ll probably not be able to use the app).

The Spotify Apps API makes it super easy to create apps that run inside Spotify. It's a very familiar programming environment for anyone who has done web programming. You can use all of your favorite libraries from jQuery to Lo-Dash to create really compelling apps that sit on top of the millions and millions of tracks in the Spotify catalog. However, unlike a web app, where anyone can publish their app on the web, to publish a Spotify App you have to submit your app to the Spotify App approval process, and only apps that Spotify approves are published. Spotify sets a high bar for what ultimately gets approved – which keeps the quality of the apps high, but also means that hacks and experiments built on the Spotify Apps platform will likely never be approved for release to the general public. It's a difficult balancing act for Spotify. They've built the ultimate music hacking platform with this API, but if they open the doors to every music hack, they will ultimately dilute the listening experience of the user – like so many other app stores that are flooded with garbage apps, if they publish every app and hack then Spotify listeners would be inundated with the musical equivalent of flashlight and fart apps. With the approval process, Spotify essentially says 'the listener comes first', which is the right choice. Still, as a music hacker I do wish it was easier to publish rich music apps built on the Spotify platform. Luckily Spotify is committed to building an active and vibrant developer ecosystem, so I don't expect they will be standing still.

Update 3/24/14: – I’ve added the ability to save these playlists back to Spotify, so you can take the Echo Nest radio playlists with you.

Second update 3/24/14 – note that Spotify's recent announcement that they are closing app submissions means that you won't be able to submit apps for publishing anymore, but you should still be able to create your own.


How the Autocanonizer works

Last week at the SXSW Music Hack Championship hackathon I built The Autocanonizer – an app that tries to turn any song into a canon by playing it against a copy of itself. In this post, I explain how it works.

At the core of The Autocanonizer are three functions: (1) build simultaneous audio streams for the two voices of the canon, (2) play them back simultaneously, and (3) supply a visualization that gives the listener an idea of what is happening under the hood. Let's look at each of these three functions:

(1A) Build simultaneous audio streams – finding similar sounding beats
The goal of the Autocanonizer is to fold a song in on itself in such a way that the result still sounds musical. To do this, we use The Echo Nest analyzer and the jremix library to do much of the heavy lifting. First we use the analyzer to break the song down into beats. Each beat is associated with a timestamp, a duration, a confidence and a set of overlapping audio segments. An audio segment contains a detailed description of a single audio event in the song. It includes harmonic data (i.e. the pitch content), timbral data (the texture of the sound) and a loudness profile. Using this info we can create a Beat Distance Function (BDF) that returns a value representing the relative distance between any two beats in the audio space. Beats that are close together in this space sound very similar; beats that are far apart sound very different. The BDF works by calculating the average distance between overlapping segments of the two beats, where the distance between any two segments is a weighted combination of the Euclidean distances between the pitch, timbral, loudness, duration and confidence vectors. The weights control which part of the sound takes more precedence in determining beat distance. For instance, we can give more weight to the harmonic content of a beat, or to the timbral quality of the beat. There's no hard science for selecting the weights; I just picked some to start with and tweaked them a few times based on how well they worked. I started with the same weights that I used when creating the Infinite Jukebox (which also relies on beat similarity), but ultimately gave more weight to the harmonic component since good harmony is so important to The Autocanonizer.
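Here's a sketch of what such a BDF can look like. The segment fields (pitches, timbre, loudness_max, duration, confidence) come straight from the Echo Nest analysis; the weights shown are illustrative stand-ins, not the hand-tuned values the app actually uses:

```javascript
const WEIGHTS = { pitch: 10, timbre: 1, loudness: 1, duration: 100, confidence: 1 };

function euclidean(a, b) {
  return Math.sqrt(a.reduce((sum, v, i) => sum + (v - b[i]) ** 2, 0));
}

function segmentDistance(s1, s2) {
  return WEIGHTS.pitch * euclidean(s1.pitches, s2.pitches) +
         WEIGHTS.timbre * euclidean(s1.timbre, s2.timbre) +
         WEIGHTS.loudness * Math.abs(s1.loudness_max - s2.loudness_max) +
         WEIGHTS.duration * Math.abs(s1.duration - s2.duration) +
         WEIGHTS.confidence * Math.abs(s1.confidence - s2.confidence);
}

// Beat distance: the average distance between the segments overlapping
// each beat (paired in order; real code must handle unequal counts).
function beatDistance(beat1, beat2) {
  const n = Math.min(beat1.segments.length, beat2.segments.length);
  if (n === 0) return Infinity;
  let total = 0;
  for (let i = 0; i < n; i++) {
    total += segmentDistance(beat1.segments[i], beat2.segments[i]);
  }
  return total / n;
}
```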

(1B) Build simultaneous audio streams – building the canon
The next challenge, and perhaps the biggest challenge of the whole app, is to build the canon – that is, given the Beat Distance Function, create two audio streams, one beat at a time, that sound good when played simultaneously. The first stream is easy: we'll just play the beats in normal beat order. It's the second stream, the canon stream, that we have to worry about. The challenge: put the beats in the canon stream in an order such that (1) the beats are in a different order than the main stream, and (2) they sound good when played with the main stream.

The first thing we can try is to make each beat in the canon stream be the most similar sounding beat to the corresponding beat in the main stream.  If we do that we end up with something that looks like this:

[Visualization: 'Someone Like You' (autocanonized) by Adele – the rat's nest of most-similar-beat connections]

It's a rat's nest of connections; very little structure is evident. You can listen to what it sounds like by clicking here: Experimental Rat's Nest version of Someone Like You (autocanonized). It's worth a listen to get a sense of where we start from. So why does this bounce all over the place like this? There are lots of reasons. First, there's lots of repetition in music – so if I'm in the first chorus, the most similar beat may be in the second or third chorus – both may sound very similar, and it is practically a roll of the dice which one will win, leading to much bouncing between the two choruses. Second, since we have to find a similar beat for every beat, even beats that have no near neighbors have to be forced into the graph, which turns it into spaghetti. Finally, the underlying beat distance function relies on weights that are hard to generalize for all songs, leading to more noise. The bottom line is that this simple approach leads to a chaotic and mostly non-musical canon with head-jarring transitions on the canon channel. We need to do better.

There are glimmers of musicality in this version though. Every once in a while, the canon channel will remain on a single sequential set of beats for a little while. When this happens, it sounds much more musical. If we can make this happen more often, then we may end up with a better sounding canon. The challenge then is to find a way to identify long consecutive strands of beats that fit well with the main stream. One approach is to break down the main stream into a set of musically coherent phrases and align each of those phrases with a similar sounding coherent phrase. This will help us avoid the many head-jarring transitions that occur in the previous rat's nest example. But how do we break a song down into coherent phrases? Luckily, it is already done for us. The Echo Nest analysis includes a breakdown of a song into sections – high level musically coherent phrases in the song – exactly what we are looking for. We can use the sections to drive the mapping. Note that breaking a song down into sections is still an open research problem – there's no perfect algorithm for it yet, but The Echo Nest algorithm is state-of-the-art and probably as good as it gets. Luckily, for this task, we don't need a perfect algorithm. In the above visualization you can see the sections. Here's a blown up version – each of the horizontal colored rectangles represents one section:

[Visualization: the same song with its 11 sections shown as horizontal colored rectangles]

You can see that this song has 11 sections. Our goal then is to get a sequence of beats for the canon stream that aligns well with the beats of each section. To make things a little easier to see, let's focus in on a single section. The following visualization shows the similar beat graph for a single section (section 3) in the song:

[Visualization: the similar-beat graph for section 3]

You can see bundles of edges leaving section 3 bound for sections 5 and 6. We could use these bundles to find the most similar sections and simply overlap them. However, given that sections are rarely the same length, nor are they likely to be aligned to similar sounding musical events, it is unlikely that this would give a good listening experience. However, we can still use this bundling to our advantage. Remember, our goal is to find a good coherent sequence of beats for the canon stream. We can make a simplifying rule that we will select a single sequence of beats for the canon stream to align with each section. The challenge, then, is to simply pick the best sequence for each section. We can use these edge bundles to help us do this. For each beat in the main stream section we calculate the distance to its most similar sounding beat. We aggregate these distances and find the most commonly occurring one. For example, there are 64 beats in section 3. The most commonly occurring jump distance to a sibling beat is 184 beats away. There are ten beats in the section with a similar beat at this distance. We then use this most common distance of 184 to generate the canon stream for the entire section. For each beat of this section in the main stream, we add a beat in the canon stream that is 184 beats away from the main stream beat. Thus for each main stream section we find the most similar matching stream of beats for the canon stream. This visualization shows the corresponding canon beat for each beat in the main stream:

[Visualization: the canon beat chosen for each beat in the main stream]

This has a number of good properties. First, the segments don't need to be perfectly aligned with each other. Note, in the above visualization, that the similar beats to section 3 span across sections 5 and 6. If there are two separate chorus segments that should overlap, it is no problem if they don't start at exactly the same point in the chorus. The inter-beat distance will sort it all out. Second, we get good behavior even for sections that have no strong complementary section. For instance, the second section is mostly self-similar, so this approach aligns the section with a copy of itself offset by a few beats, leading to a very nice call and answer.

[Visualization: a self-similar section aligned with an offset copy of itself]

That's the core mechanism of the Autocanonizer – for each section in the song, find the most commonly occurring distance to a sibling beat, and then generate the canon stream by assembling beats using that most commonly occurring distance. The algorithm is not perfect. It fails badly on some songs, but for many songs it generates a really good canon effect. The gallery has 20 or so of my favorites.
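In code, this boils down to building a histogram of jump distances for the section and offsetting the whole section by the winner. A minimal sketch, assuming each beat carries an index and a hypothetical mostSimilarBeat() helper built on the BDF:

```javascript
function canonStreamForSection(sectionBeats, mostSimilarBeat) {
  // Histogram of jump distances from each beat to its most similar beat.
  const counts = {};
  for (const beat of sectionBeats) {
    const jump = mostSimilarBeat(beat).index - beat.index;
    counts[jump] = (counts[jump] || 0) + 1;
  }

  // Pick the most commonly occurring jump distance (e.g. 184 beats for
  // section 3 of 'Someone Like You').
  let bestJump = 0, bestCount = -1;
  for (const [jump, count] of Object.entries(counts)) {
    if (count > bestCount) { bestCount = count; bestJump = Number(jump); }
  }

  // Offset the whole section by that distance to get the canon stream.
  return sectionBeats.map(beat => ({ main: beat.index, canon: beat.index + bestJump }));
}
```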

(2) Play the streams back simultaneously
When I first released my hack, to actually render the two streams as audio, I played each beat of the two streams simultaneously using the web audio API. This was the easiest thing to do, but for many songs this results in some very noticeable audio artifacts at the beat boundaries. Any time there's an interruption in the audio stream there's likely to be a click or a pop. For this to be a viable hack that I want to show people, I really needed to get rid of those artifacts. To do this I take advantage of the fact that for the most part we are playing longer sequences of continuous beats. So instead of playing a single beat at a time, I queue up the remaining beats in the song as a single queued buffer. When it is time to play the next beat, I check to see if it is the same beat that would naturally play if I let the currently playing queue continue. If it is, I 'let it ride' so to speak. The next beat plays seamlessly without any audio artifacts. I can do this for both the main stream and the canon stream. This virtually eliminates all the pops and clicks. However, there's a complicating factor. A song can vary in tempo throughout, so the canon stream and the main stream can easily get out of sync. To remedy this, at every beat we calculate the accumulated timing error between the two streams. If this error exceeds a certain threshold (currently 50ms), the canon stream is resync'd starting from the current beat. Thus, we can keep both streams in sync with each other while minimizing the explicit beat queuing that causes the audio artifacts. The result is an audio stream that is nearly click free.
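The resync check itself is tiny. A sketch of the idea, with canonPlayer standing in as a hypothetical wrapper around the queued web audio buffers:

```javascript
// mainElapsed/canonElapsed are the accumulated beat durations of each
// stream; canonPlayer is a hypothetical wrapper, not a real API.

const MAX_DRIFT = 0.050; // 50ms, the threshold mentioned above

function checkSync(mainElapsed, canonElapsed, canonPlayer, currentCanonBeat) {
  const drift = Math.abs(mainElapsed - canonElapsed);
  if (drift > MAX_DRIFT) {
    // Re-seek the canon stream's buffer at the current beat, trading
    // one potential click for restored alignment between the streams.
    canonPlayer.stop();
    canonPlayer.playFrom(currentCanonBeat);
  }
}
```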

(3) Supply a visualization that gives the listener an idea of how the app works
I’ve found with many of these remixing apps, giving the listener a visualization that helps them understand what is happening under the hood is a key part of the hack. The first visualization that accompanied my hack was rather lame:

[Visualization: the original 'Let It Be' (autocanonized) display]

It showed the beats lined up in a row, colored by the timbral data. The two playback streams were represented by two 'tape heads' – the red tape head playing the main stream and the green head showing the canon stream. You could click on beats to play different parts of the song, but it didn't really give you an idea what was going on under the hood. In the few days since the hackathon, I've spent a few hours upgrading the visualization to be something better. I did four things: reveal more about the song structure, show the song sections, show the canon graph, and animate the transitions.

Reveal more about the song
The colored bars don't really tell you too much about the song. With a good song visualization you should be able to tell the difference between two songs that you know just by looking at the visualization. In addition to the timbral coloring, showing the loudness at each beat should reveal some of the song structure. Here's a plot that shows the beat-by-beat loudness for the song Stairway to Heaven:

[Plot: beat-by-beat loudness for 'Stairway to Heaven']

You can see the steady build in volume over the course of the song. But it is still a less than ideal plot. First of all, one would expect the build for a song like Stairway to Heaven to be more dramatic than this slight incline shows. This is because the loudness scale is logarithmic. We can get back some of the dynamic range by converting to a linear scale like so:

[Plot: the same loudness data on a linear scale]

This is much better, but the noise still dominates the plot. We can smooth out the noise by taking a windowed average of the loudness for each beat. Unfortunately, that also softens the sharp edges, so that short events like 'the drop' could get lost. We want to preserve the significant edges while still eliminating much of the noise. A good way to do this is to use a median filter instead of a mean filter. When we apply such a filter we get a plot that looks like this:

[Plot: the loudness curve after median filtering]

The noise is gone, but we still have all the nice sharp edges. Now there's enough info to help us distinguish between two well known songs. See if you can tell which of the following songs is 'A Day in the Life' by The Beatles and which one is 'Hey Jude' by The Beatles.

[Plot: loudness profile of mystery song #1]

Which song is it: 'Hey Jude' or 'A Day in the Life'?

[Plot: loudness profile of mystery song #2]

Which song is it: 'Hey Jude' or 'A Day in the Life'?
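For the curious, the median filter itself is only a few lines of Javascript. A minimal sketch (the window size here is an arbitrary choice, not necessarily the value the visualization uses):

```javascript
// Replace each beat's loudness with the median of a small window
// around it, killing noise while keeping the sharp edges.

function medianFilter(values, windowSize = 5) {
  const half = Math.floor(windowSize / 2);
  return values.map((_, i) => {
    const window = values.slice(Math.max(0, i - half), i + half + 1);
    const sorted = [...window].sort((a, b) => a - b);
    return sorted[Math.floor(sorted.length / 2)];
  });
}
```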

Show the song sections
As part of the visualization upgrades I wanted to show the song sections to help show where the canon phrase boundaries are. To do this I created a simple set of colored blocks along the baseline. Each one aligns with a section. The colors are assigned randomly.

Show the canon graph and animate the transitions.
To help the listener understand how the canon is structured, I show the canon transitions as arcs along the bottom of the graph. When the song is playing, the green cursor representing the canon stream animates along the graph, giving the listener a dynamic view of what is going on. The animations were fun to do. They weren't built into Raphael; instead I got to do them manually. I'm pretty pleased with how they came out.

[Visualization: the upgraded display for 'Stairway to Heaven', with section blocks and canon arcs]

All in all I think the visualization is pretty neat now compared to where it was after the hack. It is colorful and eye catching. It tells you quite a bit about the structure and makeup of a song (including detailed loudness, timbral and section info). It shows how the song will be represented as a canon, and when the song is playing it animates to show you exactly what parts of the song are being played against each other. You can interact with the visualization by clicking on it to listen to any particular part of the canon.

[Visualization: the interactive canon view for 'Stairway to Heaven']

Wrapping up – this was a fun hack and the results are pretty unique. I don't know of any other auto-canonizing apps out there. It was pretty fun to demo the hack at the SXSW Music Hack Championships too. People really seemed to like it and even applauded spontaneously in the middle of my demo. The updates I've made since then – such as fixing the audio glitches and giving the visualization a face lift – make it ready for the world to come and visit. Now I just need to wait for them to come.


The Autocanonizer

I've spent the last 24 hours at the SXSW Music Hackathon championship. For my hack I've built something called The Autocanonizer. It takes any song and tries to make a canon out of it. A canon is a song that can be played against a copy of itself. The Autocanonizer does this by looking at the detailed audio in the song (via The Echo Nest analysis) and looking for parts of the song that might overlap well. It builds a map of all these parts, and when it plays the song it plays the main audio while overlapping it with the second audio stream. It doesn't always work, but when it does, the results can be quite fun and sometimes quite pleasing.

To go along with the playback I created a visualization that shows the song and the two virtual tape heads that are playing the song. You can click on the visualization to hear particular bits.

[Visualization: 'Let It Be' (autocanonized) by The Beatles]

There are still some audio artifacts on a few songs. I know how to fix them, but it requires some subtle math (subtraction) that I'm sure I'll mess up right before the hackathon demo if I attempt it now, so it will have to wait for another day. Also, there's a Firefox issue that I will fix in the coming days. Or you can go and fix all this yourself, because the code is on github.


Let’s go to France to write some code

This past weekend was the fourth annual MIDEM Music Hack Day held in Cannes. Over the course of about 48 hours, two dozen or so hackers gathered in a beautiful hacking space at the top of the Palais des Festivals to build something cool with music and technology.

The MIDEM Music Hack Day is no ordinary Music Hack Day. It has a very limited enrollment, so only hackers that have demonstrated the ability to create music hacks are invited. Add to that the fact that there is about 50% more time to build hacks, and the result is a set of very high-quality hacks.

Martyn giving the hack talk (photo by @sydlawrence)

Martyn Davies, master Music Hack Day coordinator, kicked off the event with a talk to the MIDEM attendees about what Music Hack Day is all about. Martyn talked about the things that drive the hackers to spend their weekends hacking on code – in particular how the Music Hack Day is a chance to combine their love for music and technology, be creative, and build something new and cool during the weekend. Martyn demonstrated two representative hacks built at previous Music Hack Days. First he showed the demo given by master hacker @sydlawrence of a hack called Disco Disco Tech. The excitement in Syd's voice is worth the price of admission alone.

Next he showed one of my favorite Music Hack Day hacks of all time, Johnny Cash is Everywhere by @iainmullen.

One of the special features of the MIDEM Music Hack Day is that non-hackers get to pitch their hacking ideas to the hackers about what apps they'd really like to see created over the weekend. There were a number of pitches, ranging from a proposal for an artist-centric tool for organizing a creative music production team to a whimsical request to show what the music on the Internet sounds like when it decays. (Here's one answer). All of the idea pitches were interesting, but here's the secret: the hackers are not going to build your idea. It's not because they don't like your idea, it is because they already have tons of good ideas. The hackers are a very creative bunch, each with a long list of ideas waiting to be built. What the hackers usually lack is a solid block of time to implement their own ideas, and so a hackathon is the perfect time to take that best idea on the list and work for a solid 24 hours to get it done. It is rare for a hacker to get excited about building someone else's idea when they have so many of their own. As they say: "ideas are cheap, execution is everything".

Once the opening talks concluded, we hackers made our way up to the top of the Palais des Festivals (the heart of MIDEM) to our hacking space. It's a great space with lots of natural light and a terrace that overlooks the French Riviera, and it is some distance away from the main conference so we were not bothered by stray walk-ons.

View from the hacker space

To kick things off, we went around the room introducing ourselves, briefly talking about our backgrounds, skills, and ideas, and almost immediately got to hacking. Since all the hackers were experienced, there was no need for the typical API workshops or learning sessions. Everyone knew, for instance, that I was from The Echo Nest and was ready to answer any questions about the Echo Nest API that should arise.

The hacking space (photo by @neomoha)

The next 46 hours was a blur of coding, punctuated by food delivery, the sound of the espresso maker and the occasional wandering pigeon.

Hacker Self Portraits

Yuli's hack in progress

There’s a math error lurking in there …


@alastairporter does the math to work out the path of a record groove for his hack

Early morning coding on the French Riviera

Typical view from the international executive music hacker suite

The Hacks


After 48 hours, we gathered in the Innovation Hall to demonstrate what we built. Each hacking team had about three minutes to show their stuff.  Eighteen hacks were built.  Here are some of my very favorites:

DJ Spotify  – built by Yuli Levtov – This is a real hacker’s hack. Yuli had a problem. He wanted to use Spotify when he DJed, but Spotify won’t let you beat match and cross fade songs. In fact, Spotify won’t even let you play two streams at once. So Yuli got to work to make it happen.  Along the way Yuli augmented his DJ playlists with BPM and key information from The Echo Nest (using a very clever growl hack).  One of the highlights of my MIDEM week was listening to Yuli try to explain what OS virtualization is and how Soundflower works to a room full of Music Biz types. Yuli has a detailed blog post that describes how his hack works. Yuli’s hack was voted the best hack by the hackers. Well done Yuli!

That One Song – by Matt Ogle and Hannah Donovan – For this hack, Matt added a feature to his super popular This Is My Jam site. Type in any band and let the Jam community tell you the one song you should hear first. Plus: playback options, commentary, and an alternative "B-side" song recommendation for each artist.

Skrillex Invader 20 – by Vivien Barousse – Imagine Guitar Hero meets Space Invaders meets Skrillex meets a piano keyboard. Skrillex Invader 20 is a small game designed to help you improve your skills on a piano keyboard.

Bang The Biebs – by Robin Johnson – A hack that combined the Leap Motion with a game. This hack was special, not only for the Bieber banging, but because Robin used the hack as a way to learn Javascript. This was his first JS app.

[Screenshot: Bang The Biebs]

Scratchy Record –  by Alastair Porter – Playing music from mp3s today has no soul. Scratchy Record reproduces the joy that can be had listening to music on vinyl. From the dirty needle causing extra noise, to the pops and skips that we all love, to the need to get up half way through the album and turn it over. Scratchy Record has it all.


HappyClappy – by Peck, ankit and mager – an iOS app that lets you queue up songs by clapping the rhythm. Uses The Echo Nest to search for songs by BPM.

PhatStats – by Syd Lawrence – Syd tried to build a sustainable subscription business during the hack with PhatStats, a new tool to discover up-and-coming talent across the social web and to monitor your videos and their engagement levels.

[Screenshot: PhatStats]

This is your tour – Sam Phippen – Going on tour is hard. You’ve got to find someone to tour with. You’ve got to pick cities and venues. You’ve got to book hotels, find places to eat and drink. All of this takes far too much time.

[Screenshot: This is your tour]

Nikantas – by Sabrina Leandro – a clever app that helps you learn a new language through music lyrics. Fill in the blanks in lyrics of your favourite English, Spanish, Portuguese, French, German, or Italian artists. Can you recognise a word in a song? A word will be displayed on the screen; press the space key when the artist sings it.

neoScores meets Deezer – Bob Hamblok – HTML5 sheet music score following inside Deezer.

[Screenshot: neoScores running inside Deezer]

Seevl hipster – by Alexandre Passant – Be a real hipster. Impress your friends with obscure music tastes. Do you want to impress your friend who's into electro-folk, or that other one who only listens to avant-garde metal? Now you can! With seevl hipster, find obscure artists that match your friend's tastes, and show off on their Facebook wall.

Playlist Plus – Iain Mullan – Playlist Plus allows you to create a richer interactive version of a playlist. Add notes and comments to each track, to share with friends, or distribute an in-depth album review. Like a particular lyric? Bookmark it at the exact timestamp in the song. Think a track has a heavy Zeppelin influence? Link to the song/album it reminds you of. Cover version? Link to the original and let the reader stream it instantly from within your PlaylistPlus!

ScapeList – by Mohammed Sordo – What does a landscape sound like? You take a picture of, let's say, the Grand Canyon, a la Instagram, but you also want to attach a song to it – a song that made sense to you while you were taking that picture. Now imagine that other people went to the same place, took another picture of it, but picked a different song. You end up with a playlist of songs related to that landscape – a ScapeList – curated by the users themselves, which you can listen to.

VideoFairy – by Suzie Blackman – A radio-style music discovery app designed for smart TV! It's a bit like channel hopping, but for music videos. VideoFairy finds music videos from artists you'll like, with a simple interface that works with a remote control (use arrow keys and 'enter' on a keyboard). Designed for 'lean back' TV viewing with minimum interaction, you can sit back and watch new music recommended from your last.fm profile. Skip anything you don't like with a simple tap of the remote.

Cannes Burn – my hack – a music visualization of Ellie Goulding's Burn.

I was unusually nervous and quite tired when I gave my demo, so I made a newbie demo mistake and had trouble getting my desktop to display properly. But when I finally did, my demo went off smoothly. I only had to say a few words and hit the play button, so despite the nerves, it was a pretty easy demo to give. Here's my view of the audience while giving the demo:

Demoing Cannes Burn to the MIDEM crowd

After we presented the hacks, the hackers themselves voted for the best hack, which went to Yuli for his amazing DJ Spotify. Yuli is quite the gracious and humble winner, making sure everyone got a glass of his winning champagne.

Post Hack
After all the hacking the exhausted hackers took some time to kick back, have a good dinner, a few drinks and long conversations into the night about life as an international music hacker. 

Post hack drinks with @saleandro @sydlawrence @alastairporter @iainmullen and @lucyeblair

Outro

Halfway through the MIDEM Music Hack Day I paused to take stock. Here I was, on the other side of the world, sitting at the top of the Palais des Festivals, overlooking the French Riviera, surrounded by friends and writing code. It was a great place to be, and I felt very fortunate to be there. This was all possible because the music biz folks realize that we hackers have lots of ideas that will advance the state-of-the-art in music tech, and even more importantly, we have the ability to actually turn those ideas into reality. And so, they treat us very well. It is good to be a music hacker.


Creative hacking

My hack at the MIDEM Music Hack Day this year is what I'd call a Creative Hack. I built it not because it answered any business use case, or because it demonstrated some advanced capability of some platform or music tech ecosystem; I built it because I was feeling creative and I wanted to express my creativity in the best way that I can, which is to write a computer program. The result is something I'm particularly proud of. It's a dynamic visualization of the song Burn by Ellie Goulding. Here's a short, low-res excerpt, but I strongly suggest that you go and watch the full version here: Cannes Burn

[youtube http://www.youtube.com/watch?v=Fys0RGi3kA8&feature=youtu.be]

Unlike all of the other hacks that I've built, this one feels really personal to me. I wasn't just trying to solve a technical problem. I was trying to capture the essence of the song in code, trying to tell its story and maybe even touch the viewer. The challenge wasn't in the coding; it was in the feeling.

After every hack day, I’m usually feeling a little depressed.  I call it post-hacking depression. It is partially caused by being sleep deprived for 48 hours, but the biggest component is that I’ve put my all into something for 48 hours and then it is just over. The demo is done, the code is checked into github, the app is deployed online and people are visiting it (or not). The thing that just totally and completely took over my life for two days is completely gone. It is easy to reflect back on the weekend and wonder if all that time and energy was worth it.

Monday night after the MIDEM hack day was over I was in the midst of my post-hack depression sitting in a little pub called Le Crillon when a guy came up to me and said “I saw your hack.  It made me feel something. Your hack moved me.”

Cannes Burn won't be my most popular hack, nor is it my most challenging hack, but it may be my favorite hack because I was able to write some code and make somebody that I didn't know feel something.

Million Song Shuffle

Back in 2001, when the first iPod was released, Shuffle Play was all the rage. Your iPod had your 1,000 favorite songs on it, so picking songs at random to play created a pretty good music listening experience. Today, however, we don't have 1,000 songs in our pocket. With music services like Rdio, Rhapsody or Spotify, we are walking around with millions of songs in our pocket. I've often wondered what it would be like to use Shuffle Play when you have millions of songs to shuffle through. Would it be a totally horrible listening experience, listening to artists that are so far down the long tail that they don't even know that they are part of a dog? Would you suffer from terminal iPod whiplash as you are jerked between Japanese teen pop and a John Philip Sousa march?

To answer these questions, I built an app called Million Song Shuffle. This app creates a playlist by randomly selecting songs from a pool of many millions of songs. It draws from the Rdio collection, and if you are an Rdio user you can listen to the full tracks.

[Screenshot: Million Song Shuffle surfacing a deep-cut artist (Cytotoxin)]

The app also takes advantage of a nifty new set of data returned by the Echo Nest API. It shows you the absolute hotttnesss rank for the song and the artist, so you will always know how deep you are into the long tail (answer: almost always, very deep).

So how is listening to millions of songs at random? Surprisingly, it’s not too bad. The playlist certainly gets a high score for eclecticism and surprise, and most of the time the music is quite listenable. But give it a try, and form your own opinion.

It's fun, too, to see how long you can listen to the Million Song Shuffle before you encounter a song or even an artist that you've heard of before. If the artist is not in the top 5K artists, it is likely you've never heard of them. After listening to Million Song Shuffle for a little while you start to get an idea of how much music there is out there. There's a lot.

For the ultimate eclectic music listening experience, try the Million Song Shuffle.


Attention Deficit Radio

This weekend, I've been in London, attending the London Music Hack Day. For this weekend's hack, I was inspired by my daughter's music listening behavior – when she listens to music, she is good for the first verse or two and the chorus, but after that, she's on to the next song. She has probably never heard a bridge. So for my daughter, and folks like her with short attention spans, I've built Attention Deficit Radio. ADR creates a Pandora-like radio station based upon a seed artist, but doesn't bother you by playing whole songs. Instead, after about 30 seconds or so, it is on to the next song. The nifty bit is that ADR will try to beat-match and crossfade between the songs, giving you a (hopefully) seamless listening experience as you fly through the playlist. Of course, those with short attention spans need something to look at while listening, so ADR has lots of gauges that show the radio status – the current beat, the status of the crossfade, the tempo, and the song loading status.

[Screenshot: Attention Deficit Radio]
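The crossfade half of the trick is easy to sketch with the web audio API's gain automation (the beat-matching half, which lines up the downbeats before the fade starts, is the hard part and is omitted here). Assume each song's source node is routed through its own GainNode into the destination:

```javascript
function crossfade(ctx, fromGain, toGain, seconds) {
  const now = ctx.currentTime;
  // Ramp the outgoing song down and the incoming song up over the same
  // interval. Linear ramps keep the sketch short; an equal-power curve
  // would sound a little smoother.
  fromGain.gain.setValueAtTime(1, now);
  fromGain.gain.linearRampToValueAtTime(0, now + seconds);
  toGain.gain.setValueAtTime(0, now);
  toGain.gain.linearRampToValueAtTime(1, now + seconds);
}
```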

There may be a few rough edges, and the paint is not yet dry, but give Attention Deficit Radio a try if you have a short listening attention span.
