Posts Tagged remix

Earworm and Capsule at Music Hack Day San Francisco

This weekend The Echo Nest is releasing some new remix functionality – Earworm and Capsule. Earworm lets you create a new version of a song that is any length you want. Would you like a 2-minute version of Stairway to Heaven? Or a 3-hour version of Freebird? Or an infinitely long version of Sex Machine? Earworm can do that. Here’s a 60-minute version of a little Rolling Stones ditty:

Capsule takes a list of tracks and optimizes the song transitions by reordering them and applying automatic beat matching and cross fading to give you a seamless playlist.  It is really neat stuff.    Here’s an example of a capsule between two Bob Marley songs:

It makes a nice little Bob Marley medley.

Jason writes about Capsule and Earworm and some other new features in remix  in his new (and rather awesome) blog:  Running With Data – Earworm and Capsule.  Check it out.


Bad Romance – the memento edition

At SXSW I gave a talk about how computers can help make remixing music easier. For the talk I created a few fun remixes. Here’s one of my favorites. It’s a beat-reversed version of Lady Gaga’s Bad Romance. The code to create it is here: vreverse.py
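The idea fits in a few lines. Here’s a minimal sketch in the spirit of vreverse.py, written against the (now retired) Echo Nest remix API; treat it as an illustration rather than the actual script:

import echonest.remix.audio as audio   # older remix releases expose this module as echonest.audio

def beat_reverse(input_filename, output_filename):
    track = audio.LocalAudioFile(input_filename)   # load the song and fetch its analysis
    beats = list(track.analysis.beats)             # beat-level slices of the track
    beats.reverse()                                # play the beats in reverse order
    audio.getpieces(track, beats).encode(output_filename)   # stitch and write the result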


Here comes the antiphon

I’m gearing up for the SXSW panel on remix that I’m giving in a couple of weeks. I thought I should veer away from ‘science experiments’ and try to create some remixes that sound musical. Here’s one where I’ve used remix to apply a little bit of a pre-echo to ‘Here Comes the Sun’. It gives it a call-and-answer feel:

The core (choir?) code is thus:

out = []      # collected audio chunks, joined afterwards (e.g. with audio.assemble)
last = None
for bar in self.bars:
    cur_data = self.input[bar]
    if last:
        # blend 30% of the previous bar into the current one to make the echo
        last_data = self.input[last]
        mixed_data = audio.mix(cur_data, last_data, mix=0.3)
        out.append(mixed_data)
    else:
        out.append(cur_data)
    last = bar


Rearranging the Machine

Last month I used Echo Nest remix to rearrange a Nickelback song (see From Nickelback to Bickelnack) by replacing each sound segment with another similar-sounding segment. Since Nickelback is notorious for their self-similarity, a few commenters suggested that I try the remix with a different artist to see if the musicality stems from the remix algorithm or from Nickelback’s self-similarity. I also had a few tweaks to the algorithm that I wanted to try out, so I gave it a go. Instead of remixing Nickelback I remixed the best-selling Christmas song of 2009: Rage Against The Machine’s ‘Killing in the Name’.

Here’s the remix using the exact same algorithm that was used to make the  Bickelnack remix:

Like the Bickelnack remix – this remix is still rather musical. (Source for this remix is here:  vafroma.py)

A true shuffle: One thing that is a bit unsatisfying about this algorithm is that it is not a true reshuffling of the input. Since the remix algorithm is looking for the nearest match, it is possible for a single segment to appear many times in the output while some segments may not appear at all. For instance, of the 1140 segments that make up the original RATM Killing in the Name, only 706 are used to create the final output (some segments are used as many as 9 times in the output). I wanted to make a version that was a true reshuffling, one that used every input segment exactly once in the output, so I changed the inner remix loop to only consider unused segments for insertion. The algorithm is a greedy one, so segments that occur early in the song have a bigger pool of replacement segments to draw on. The result is that as the song progresses, the similarity of replacement segments tends to drop off.
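Roughly, the extra constraint looks like the sketch below. This is an illustration of the idea, not the actual vafroma2.py; it assumes each segment has already been reduced to a feature vector (pitches, timbre, and loudness from the analysis), and uses a plain Euclidean distance in place of whatever weighting the real script applies:

import numpy as np

def true_shuffle(segments, features):
    # features[i] is a numpy vector describing segments[i] (e.g. pitches + timbre + loudness)
    unused = set(range(len(segments)))
    picks = []
    for i, feat in enumerate(features):
        candidates = [j for j in unused if j != i] or list(unused)   # the last pick may have no alternative
        # greedy: the closest-sounding segment that has not been used yet
        best = min(candidates, key=lambda j: np.linalg.norm(features[j] - feat))
        picks.append(best)
        unused.discard(best)
    return [segments[j] for j in picks]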

I was curious to see how much extra error there was in the later segments, so I plotted the segment fitting error. In this plot, the red line is the fitting error for the original algorithm and the green line is the fitting error for the shuffling algorithm. I was happy to see that for most of the song there is very little extra error in the shuffling algorithm; things only get bad in the last 10% of the song.

You can see and hear the song decay as the pool of replacement segments diminishes. The last 30 seconds are quite chaotic. (Remix source for this version is here: vafroma2.py)

More coherence: Pulling song segments from any part of a song to build a new version yields fairly coherent audio; however, the remixed video can be rather chaotic as it seems to switch cameras every second or so. I wanted to make a version of the remix that would reduce the shifting of the camera. To do this, I gave a slight preference to consecutive segments when picking the next segment. For example, if I’ve replaced segment 5 with segment 50, then when it is time to replace segment 6, I’ll give segment 51 a little extra chance (a sketch of this preference follows the video below). The result is that the output contains longer sequences of contiguous segments; nevertheless, no segment is ever in its original spot in the song. Here’s the resulting version:

I find this version to be easier to watch.  (Source is here:  vafroma3.py).
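The preference for consecutive segments can be expressed as a small discount on the distance of the ‘continuation’ candidate. Again, this is a sketch of the idea rather than the actual vafroma3.py, and the bonus factor is an assumed tuning constant:

import numpy as np

CONTINUATION_BONUS = 0.9   # assumed constant; scaled distances look a bit more attractive

def coherent_shuffle(segments, features):
    # features[i] is a numpy vector describing segments[i]
    picks, prev = [], None
    for i, feat in enumerate(features):
        best, best_dist = None, float('inf')
        for j in range(len(features)):
            if j == i:
                continue                       # a segment never stays in its original spot
            dist = np.linalg.norm(features[j] - feat)
            if prev is not None and j == prev + 1:
                dist *= CONTINUATION_BONUS     # favor continuing the previous pick's run
            if dist < best_dist:
                best, best_dist = j, dist
        picks.append(best)
        prev = best
    return [segments[j] for j in picks]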

Next experiments along these lines will be to draw segments from a large number of different songs by the same artist, to see if we can rebuild a song without using any audio from the source song. I suspect that Nickelback will again prove themselves to be the masters of self-similarity.

Here’s the original, un-remixed version of RATM’s ‘Killing in the Name’:


Echo Nest analysis and visualization for Dopplereffekt – Scientist


From Nickelback to Bickelnack

I saw that Nickelback just received a Grammy nomination for Best Hard Rock Performance with their song ‘Burn It to the Ground’ and wanted to celebrate the event. Since Nickelback is known for their consistent sound, I thought I’d try to remix their Grammy-nominated performance to highlight their awesome self-similarity. So I wrote a little code to remix ‘Burn It to the Ground’ with itself. The algorithm I used is pretty straightforward:

  1. Break the song down into its smallest nuggets of sound (a.k.a. segments)
  2. For each segment, replace it with a different segment that sounds the most similar (see the sketch below)
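In remix terms, step 2 boils down to a nearest-neighbour search over the analysis segments. Here’s a minimal, illustrative sketch of that matching step, not the actual source linked at the end of this post; a plain Euclidean distance over the analysis features stands in for whatever weighting the real script uses:

import numpy as np

def segment_features(seg):
    # pitch (12 values), timbre (12 values) and peak loudness, as exposed by the analysis
    return np.array(list(seg.pitches) + list(seg.timbre) + [seg.loudness_max])

def self_remix(track):
    segs = track.analysis.segments
    feats = [segment_features(s) for s in segs]
    out = []
    for i, feat in enumerate(feats):
        # step 2: swap in the most similar-sounding *other* segment
        best = min((j for j in range(len(segs)) if j != i),
                   key=lambda j: np.linalg.norm(feats[j] - feat))
        out.append(segs[best])
    return out   # a list of segments that audio.getpieces(track, out) can render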

I applied the algorithm to the music video.  Here are the results:

Considering that none of the audio is in its original order,  and 38% of the original segments are never used, the remix sounds quite musical and the corresponding video is quite watchable.  Compare to the original (warning, it is Nickelback):

Feel free to browse the source code, download remix and try creating your own.


DJ API’s secret sauce

Last week at the Echo Nest 4-year anniversary party we had two renowned DJs keeping the music flowing. DJ Rupture was the featured act, but opening the night was the Echo Nest’s own DJ API (a.k.a. Ben Lacker), who put together a 30-minute set using Echo Nest remix.

 

DJ API at work (Photo by Jason Sundram)


I was really quite astounded at the quality of the tracks Ben put together (and all of them apparently done on the afternoon before the gig).  I asked Ben to explain how he created the tracks. Here’s what he said:

1. ‘One Thing’ – featuring Michael Jackson (dj api’s rip)

I found a half-dozen a cappella Michael Jackson songs as well as instrumental and a cappella recordings of Amerie’s “One Thing” on YouTube. To get Michael Jackson to sing “One Thing”, I stitched all his a cappella tracks together into a single track, then ran afromb: for each segment in the a cappella version of “One Thing”, I found the segment in the MJ a cappella medley that was closest in pitch, timbre, and loudness. The result sounded pretty convincing, but was heavy on the “uh”s and breath sounds. Using the pitch-shifting methods in modify.py, I shifted an a cappella version of “Ben” to be in the same key as “One Thing”, then ran afromb again. I edited together part of this result and part of the first result, then synced them up with the instrumental version of “One Thing.”
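The key-matching step amounts to working out how many semitones to move one track so its key lines up with the other’s. A hypothetical helper like the one below could compute that offset; the actual shifting is done by remix’s modify.py, whose calls aren’t reproduced here:

def semitone_offset(source_key, target_key):
    # keys are pitch classes 0-11 as reported by the analysis (C = 0, C# = 1, ...)
    diff = (target_key - source_key) % 12
    if diff > 6:
        diff -= 12        # prefer the smaller move, e.g. down 5 semitones rather than up 7
    return diff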

2. One Thing (dj api’s gamelan version)

I used afromb again here, this time resynthesizing the instrumental version of “One Thing” from the segments of a recording of a Balinese gamelan orchestra. I synced this with the a cappella version of “One Thing” and added some kick drums for a little extra punch.

3. Billie Jean (dj api screwdown)

First I ran summary on an instrumental version of Beyoncé’s “Single Ladies” (another YouTube find) to produce a version consisting only of the “ands” (every second eighth note). I then used modify.shiftRate to slow down an a cappella version of “Billie Jean” until its tempo matched that of the summarized “Single Ladies”. I synced the two, and repeated some of the final sections of “Single Ladies” to follow the form of “Billie Jean”.
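The tempo-matching part is just a ratio of the two tempos, which would then be handed to the rate-shifting routine mentioned above (modify.shiftRate in the post; its exact signature and rate convention aren’t reproduced here):

def slowdown_ratio(vocal_bpm, backing_bpm):
    # e.g. a 120 BPM vocal over a 100 BPM backing track gives a ratio of roughly 0.83
    return backing_bpm / vocal_bpm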

It was a great set and everyone had a good time listening to the morphed tunes. At the next party hopefully we’ll get to see Ben do some live remix performance programming during the set (which of course won’t be a set, it will really be a python list).

