Posts Tagged python

Artist radio in 10 lines of code

Last week we released Pyechonest, a Python library for the Echo Nest API. Pyechonest gives the Python programmer access to the entire Echo Nest API, including artist- and track-level methods. Now, after 9 years working at Sun Microsystems, I am a diehard Java programmer, but I must say that I really enjoy the nimbleness and expressiveness of Python. It’s fun to write little Python programs that do the exact same thing as big Java programs. For example, I wrote an artist radio program in Python that, given a seed artist, generates a playlist of tracks by wandering around the artists in the neighborhood of the seed artist and gathering audio tracks. With Pyechonest, the core logic is 10 lines of code:

def wander(band, max=10):
    played = []
    while max:
        if band.audio():
            audio = random.choice(band.audio())
            if audio['url'] not in played:
                play(audio)
                played.append(audio['url'])
                max -= 1
        band = random.choice(band.similar())

(You can see/grab the full code with all the boilerplate in the SVN repository.)

This method takes a seed artist (band) and selects a random track from the set of audio that The Echo Nest has found on the web for that artist; if we haven’t already played it, we play it. Then we select a near neighbor of the seed artist and do it all again until we’ve played the desired number of songs.
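To see the control flow without an API key or the Echo Nest service, here is the same wandering loop over a tiny in-memory catalog. The band names, URLs, and dict layout are all invented for the sketch; only the no-repeat logic mirrors the code above.

```python
import random

# A toy stand-in for the wander logic above, with no Echo Nest calls:
# each "band" maps to invented audio URLs and a list of similar bands.
CATALOG = {
    "Band A": {"audio": ["a1.mp3", "a2.mp3"], "similar": ["Band B"]},
    "Band B": {"audio": ["b1.mp3"], "similar": ["Band A"]},
}

def wander(name, count=3):
    """Hop between similar bands, collecting unseen track URLs."""
    played = []
    while count:
        band = CATALOG[name]
        if band["audio"]:
            url = random.choice(band["audio"])
            if url not in played:        # skip tracks we've already played
                played.append(url)
                count -= 1
        name = random.choice(band["similar"])
    return played

print(sorted(wander("Band A")))  # -> ['a1.mp3', 'a2.mp3', 'b1.mp3']
```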

For such a simple bit of code, the playlists generated are surprisingly good. Here are a few examples:

Seed Artist: Led Zeppelin:

(I think the Dale Hawkins version of Susie-Q after CCR’s Fortunate Son is just brilliant)

Seed Artist: The Decemberists:

(Note that audio for these examples is audio found on the web – and just like anything on the web the audio could go away at any time)

I think these artist-radio style playlists rival just about anything you can find on current Internet radio sites – which ain’t too bad for 10 lines of code.



Where’s the Pow?

This morning, while eating my Father’s day bagel, I got to play some more with the video aspects of the Echo Nest remix API.  The video remix is pretty slick.  You use all of the tools that you use in the audio remix, except that the object you are manipulating has a video component as well.    This makes it easy to take an audio remix and turn it into a video remix.  For instance, here’s the remix code to create a remix that includes the first beat of every bar:

 audiofile = audio.LocalAudioFile(input_filename)
 collect = audio.AudioQuantumList()
 for bar in audiofile.analysis.bars:
     collect.append(bar.children()[0])
 out = audio.getpieces(audiofile, collect)
 out.encode(output_filename)

To turn this into a video remix, just change the code to:

 av = video.loadav(input_filename)
 collect = audio.AudioQuantumList()
 for bar in av.audio.analysis.bars:
     collect.append(bar.children()[0])
 out = video.getpieces(av, collect)
 out.save(output_filename)

The code is nearly identical, differing in loading and saving, while the core remix logic stays the same.

To make a remix of a YouTube video, you need to save a local copy of the video. I’ve been using KeepVid to save a local flv (Flash video format) copy of any YouTube video.

Today I played with the track ‘Boom Boom Pow’ by the Black Eyed Peas. It’s a fun song for remix because it has a very strong beat and already has a remix feel to it. And since the song is about digital transformation, it seems to be a good target for remix experiments (and just maybe they won’t mind the liberties I’ve taken with their song).

Here’s the original (click through to YouTube to watch it since embedding is not allowed):

Just Boom

The first remix is to only include the first beat of every measure.   The code is this:

    for bar in av.audio.analysis.bars:
         collect.append(bar.children()[0])

Just Pow

Change the beat included from beat zero to beat three, and we get something that sounds very different:
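In the remix version, the only change is the index passed into bar.children(). The selection logic itself is simple enough to sketch on plain lists, with no SDK required; the bar contents below are invented for illustration.

```python
# Sketch of the beat-selection logic on plain lists: each bar is a list
# of beats, and we keep only the beat at the given index, skipping any
# bar that is too short to have one.
def pick_beat(bars, index):
    kept = []
    for beats in bars:
        if len(beats) > index:   # guard against bars with too few beats
            kept.append(beats[index])
    return kept

bars = [["boom", "boom", "pow", "tail"], ["boom", "boom", "pow"]]
print(pick_beat(bars, 0))  # -> ['boom', 'boom']
print(pick_beat(bars, 3))  # -> ['tail']
```

The guard matters because real tracks often have pickup bars or truncated final bars with fewer beats than the rest.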

Pow Boom Boom

Here’s a version with the beats reversed.  The core logic for this transformation is one line of code:

av.audio.analysis.beats.reverse()

The 5/4 Version

Here’s a version that’s in 5/4 – to make this remix I duplicated the first beat and swapped beats 2 and 3.  This is my favorite of the bunch.
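One reading of that recipe, sketched on a plain list of beats (the real version would operate on bar.children() from the SDK; the numbers here just stand in for beats):

```python
def to_five_four(beats):
    # Duplicate the first beat, then swap the original beats two and
    # three (which sit at indices 2 and 3 once the duplicate is prepended).
    if len(beats) < 3:
        return list(beats)       # too short to transform; leave as-is
    out = [beats[0]] + list(beats)
    out[2], out[3] = out[3], out[2]
    return out

print(to_five_four([1, 2, 3, 4]))  # -> [1, 1, 3, 2, 4]
```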

These transformations are of the simplest variety, taking just a couple of minutes to code and try out.   I’m sure some budding computational remixologist could do some really interesting things with this API.

Note that the latest video support is not in the main branch of remix. If you want to try some of this out you’ll need to check out the bl-video branch from the SVN repository. But this is guaranteed to be rolled into the main branch before the upcoming Music Hackday. Update: the latest video support is now part of the main branch. If you want to try it out, check it out from the trunk of the SVN repository. So download the code, grab your API key and start remixing.

Update: As Brian pointed out in the comments there was some blocking on the remix renders. This has been fixed, so if you grab the latest code, the video output quality is as good as the input.



The Echo Nest remix 1.0 is released!

Version 1.0 of the Echo Nest remix has been released. Echo Nest Remix is an open source SDK for Python that lets you write programs that manipulate music. For example, here’s a Python function that will take all the beats of a song and reverse their order:

def reverse(inputFilename, outputFilename):
    audioFile = audio.LocalAudioFile(inputFilename)
    chunks = audioFile.analysis.beats
    chunks.reverse()
    reversedAudio = audio.getpieces(audioFile, chunks)
    reversedAudio.encode(outputFilename)

When you apply this to a song by The Beatles you get something that sounds like this:

which is surprisingly recognizable, musical – and yet different from the original.

Quite a few web apps have been written that use remix. One of my favorites is DonkDJ, which will ‘put a donk’ on any song. Here’s an example: Hung Up by Madonna (with a Donk on it):

“This is my jam” lets you create mini-mixes to share with people.


And where would the web be without the ability to add more cowbell to any song?

There’s lots of good documentation already for remix. Adam Lindsay has created a most excellent overview and tutorial for remix. There’s API documentation and there’s documentation for the underlying Echo Nest web services that perform the audio analysis.  And of course, the source is available too.

So, if you are looking for that fun summer coding project, or if you need an excuse to learn Python, or perhaps you are a budding computational remixologist, download remix, grab an API key from the Echo Nest and start writing some remix code.

Here’s one more example of the fun stuff you can do with remix.   Guess the song, and guess the manipulation:



The Echo Nest Remix SDK

One of the joys of working at the Echo Nest is the communal music playlist. Anyone can add, rearrange or delete music from the queue. Of course, if you need to bail out (like when that Cyndi Lauper track is sending you over the edge) you can always put on your headphones and tune out the mix. The other day, George Harrison’s “Here Comes the Sun” started playing, but this was a new version – with a funky drum beat that I had never heard before – perhaps this was a lost track from the Beatles’ Love? Nope, turns out it was just Ben, one of the Echo Nest developers, playing around with The Echo Nest Remix SDK.

The Echo Nest Remix SDK is an open source Python library that lets you manipulate music and video. It sits on top of the Echo Nest Analyze API and hides all of the messy details of sending audio to the Echo Nest and parsing the XML response, while still giving you access to the full power of the API.

remix is one of The Echo Nest’s secret weapons – it gives you the ability to analyze and manipulate music – and not just audio manipulations such as filtering or equalizing, but remixing based on the hierarchical structure of a song. remix sits on top of a very deep analysis of the music that teases out all sorts of information about a track. There’s high-level information such as the key, tempo, time signature, mode (major or minor) and overall loudness. There’s also information about the song structure: a song is broken down into sections (think verse, chorus, bridge, solo), bars, beats, tatums (the smallest perceptual metrical unit of the song) and segments (short, uniform sound entities). remix gives you access to all of this information.

I must admit that I’ve been a bit reluctant to use remix – mainly because after 9 years at Sun Microsystems I’m a hard core Java programmer (the main reason I went to Sun in the first place was because I liked Java so much). Every time I start to use Python I get frustrated because it takes me 10 times longer than it would in Java. I have to look everything up. How do I concatenate strings? How do I find the length of a list? How do I walk a directory tree? I can code so much faster in Java. But … if there was ever a reason for me to learn Python it is this remix SDK. It is just so much fun – and it lets you do some of the most incredible things. For example, if you want to add a cowbell to every beat in a song, you can use remix to get the list of all of the beats (and associated confidences) in a song, and simply overlay a cowbell strike at each of the time offsets.
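That cowbell idea boils down to computing a list of time offsets. Here’s a sketch with invented beat data – the (start, confidence) pairs and the threshold are made up for illustration, but the filtering step is the heart of it:

```python
# Invented (start_time_seconds, confidence) pairs standing in for the
# beat list that an analysis of a real track would return.
beats = [(0.0, 0.9), (0.5, 0.4), (1.0, 0.8), (1.5, 0.7)]

def strike_offsets(beats, min_confidence=0.5):
    """Offsets (in seconds) where a cowbell strike would be overlaid,
    keeping only beats the analysis is reasonably confident about."""
    return [start for start, conf in beats if conf >= min_confidence]

print(strike_offsets(beats))  # -> [0.0, 1.0, 1.5]
```

Filtering on confidence is a judgment call: dropping low-confidence beats avoids placing strikes where the beat tracker was merely guessing.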

So here’s my first bit of Python code using  remix. I grabbed one of the code samples that’s included in the distribution, had the aforementioned Ben spend two minutes walking me through the subtleties of Audio Quantum and I was good to go.    My first bit of code just takes a song and swaps beat two and beat three of all measures that have at least 3 beats.

def swap_beat_2_and_3(inputFile, outputFile):
    audiofile = audio.LocalAudioFile(inputFile)
    bars = audiofile.analysis.bars
    collect = audio.AudioQuantumList()
    for bar in bars:
        beats = bar.children()
        if len(beats) >= 3:
            (beats[1], beats[2]) = (beats[2], beats[1])
        for beat in beats:
            collect.append(beat)
    out = audio.getpieces(audiofile, collect)
    out.encode(outputFile)

The code analyzes the input, iterates through the bars and, if a bar has at least three beats, swaps beats two and three (I must admit, even as a hard core Java programmer, the ability to swap things with (a, b) = (b, a) is pretty awesome), then encodes and writes out a new audio file. The resulting audio is surprisingly musical. Here’s the result as applied to Maynard Ferguson’s “Birdland”:

(And speaking of cool, SoundCloud is a great place to post these remixes – it lets anyone attach a comment at any point in time on a track.)

This is just great programming fun.  I think I’ll be spending my spare coding time learning more Python so I can explore all of the things one can do with remix.

