Posts Tagged java

The BPM Explorer

Last month I wrote about using the Echo Nest API to analyze tracks and generate plots that you can use to determine whether or not a machine is responsible for setting the beat of a song. I received many requests to analyze tracks by particular artists, far too many for me to do without giving up my day job. To satisfy this pent-up demand for click track analysis I’ve written an application called the BPM Explorer that lets you create your own click plots. With this application you can analyze any song in your collection, view its click plot and listen to your music, synchronized with the plot. Here’s what the app looks like:

Check out the application here: The Echo Nest BPM Explorer. It’s written in Processing and deployed with Java Web Start, so it (should) just work.

My primary motivation for writing this application was to check out the new Echo Nest Java Client to make sure that it was easy to use from Processing. One of my secret plans is to get people in the Processing community interested in using the Echo Nest API. The Processing community is filled with ultra-creative folks who have strong artistic, programming and data visualization skills. I’d love to see more song visualizations like this and this that are built using the Echo Nest APIs. Processing is really cool – I was able to write the BPM Explorer in just a few hours (it took me longer to remember how to sign jar files for webstart than it did to write the core plotter). Processing strips away all of the boring parts of graphics programming (create a frame, lay it out with a gridbag, make it visible, validate, invalidate, repaint, paint, arghh!). In Processing, you just write a draw() method that will be called 30 times a second. I hope I get the chance to write more Processing programs.
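
To show how little ceremony is involved, here is a minimal Processing sketch (just an illustrative stub, not the BPM Explorer itself): setup() runs once, and draw() is then called repeatedly at whatever frame rate you ask for.

// Minimal Processing sketch: no frames, layout managers or repaint plumbing.
void setup() {
  size(400, 200);      // window size
  frameRate(30);       // draw() will now be called 30 times a second
}

void draw() {
  background(0);
  // A stand-in for a click plot: a bar that grows with the frame count.
  rect(10, 90, frameCount % width, 20);
}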

Update: I’ve released the BPM Explorer code as open source, as part of the echo-nest-demos project hosted at google-code. You can also browse the source code for the BPM Explorer.


New Java Client for the Echo Nest API

Today we are releasing a Java client library for the Echo Nest developer API. This library gives the Java programmer full access to the API. The API includes artist-level methods such as getting artist news, reviews, blogs, audio, video, links, familiarity, hotttnesss, similar artists, and so on. The API also includes access to the renowned track analysis API that will allow you to get a detailed musical analysis of any music track. This analysis includes loudness, mode, key, tempo, time signature, detailed beat structure, harmonic content, and timbre information for a track.

To use the API you need to get an Echo Nest developer key (it’s free) from developer.echonest.com.   Here are some code samples:

// a quick and dirty audio search engine
    ArtistAPI artistAPI = new ArtistAPI(MY_ECHO_NEST_API_KEY);

    List<Artist> artists = artistAPI.searchArtist("The Decemberists", false);
    for (Artist artist : artists) {
        DocumentList<Audio> audioList = artistAPI.getAudio(artist, 0, 15);
        for (Audio audio : audioList.getDocuments()) {
            System.out.println(audio.toString());
        }
    }

// find similar artists for weezer
    ArtistAPI artistAPI = new ArtistAPI(MY_ECHO_NEST_API_KEY);
    List<Artist> artists = artistAPI.searchArtist("weezer", false);
    for (Artist artist : artists) {
        List<Scored<Artist>> similars = artistAPI.getSimilarArtists(artist, 0, 10);
        for (Scored<Artist> simArtist : similars) {
            System.out.println("   " + simArtist.getItem().getName());
        }
    }

// Find the tempo of a track
    TrackAPI trackAPI = new TrackAPI(MY_ECHO_NEST_API_KEY);
    String id = trackAPI.uploadTrack(new File("/path/to/music/track.mp3"), false);
    AnalysisStatus status = trackAPI.waitForAnalysis(id, 60000);
    if (status == AnalysisStatus.COMPLETE) {
        System.out.println("Tempo in BPM: " + trackAPI.getTempo(id));
    }

There are some nifty bits in the API. The API will cache data for you, so frequently requested data (everyone wants the latest news about Cher) will be served up very quickly. The cache can be persisted, and the shelf-life for data in the cache can be set programmatically (the default age is one week). The API will (optionally) schedule requests to ensure that you don’t exceed your call limit. For those who like to look under the hood, you can turn on tracing to see the URLs of the method calls being made and the XML that comes back.
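
As a rough sketch of how that kind of configuration might look in code (the method names below, setMaxCacheTime, saveCache and setTrace, are illustrative guesses rather than the documented signatures, so check the library’s javadoc for the real calls):

    ArtistAPI artistAPI = new ArtistAPI(MY_ECHO_NEST_API_KEY);

    // hypothetical: keep cached responses for a day instead of the default week
    artistAPI.setMaxCacheTime(24 * 60 * 60 * 1000L);

    // hypothetical: write the cache to disk so it survives a restart
    artistAPI.saveCache();

    // hypothetical: log the outgoing method URLs and the returned XML
    artistAPI.setTrace(true);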

If you are interested in kicking the tires of the Echo Nest API and you are a Java or Processing programmer, give the API a try.

If you have any questions, comments, or problems about the API, post to the Echo Nest Forums.


Track Upload Sample Code

One of the biggest pain points users have with the Echo Nest developer API is the track upload method. This method lets you upload a track for analysis (which can subsequently be retrieved by a number of other API method calls such as get_beats, get_key, get_loudness and so on). Track upload, unlike all of the other Echo Nest methods, requires you to construct a multipart/form-data POST request. Since I get a lot of questions about track upload I decided that I needed to actually code my own to get a full understanding of how to do it, so that (1) I could answer detailed questions about the process and (2) I could point to my code as an example of how to do it. I could have used a library (such as the Jakarta HttpClient library) to do the heavy lifting, but then I wouldn’t have learned a thing, nor would I have some code to point people at. So I wrote some Java code (part of the forthcoming Java Client for the Echo Nest web services) that will do the upload.

You can take a look at this post method in its google-code repository. The tricky bit about the multipart/form-data post is getting the multi-part form boundaries just right. There’s a little dance one has to do with the proper carriage returns and line feeds, double-dash prefixes, double-dash suffixes and random boundary strings. Debugging can be a pain in the neck too, because if you get it wrong, typically the only diagnostic you get is a ‘500 error’, which just means that something bad happened.
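
To make the boundary dance concrete, here is a stripped-down sketch of the framing (this is not the library code itself; the URL and form field name are placeholders, and the real Echo Nest upload also sends the API key and other parameters as additional parts):

import java.io.DataOutputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

public class MultipartPost {
    // The multipart spec wants CRLF line endings; a bare '\n' is one of the
    // classic ways to earn a mysterious 500.
    private static final String CRLF = "\r\n";

    public static int postFile(String url, String fieldName, File file) throws IOException {
        // Any random string that never occurs in the payload will do as a boundary.
        String boundary = "----boundary" + System.currentTimeMillis();

        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        conn.setDoOutput(true);
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "multipart/form-data; boundary=" + boundary);

        try (DataOutputStream out = new DataOutputStream(conn.getOutputStream());
             FileInputStream in = new FileInputStream(file)) {
            // Each part opens with two dashes, the boundary and a CRLF ...
            out.writeBytes("--" + boundary + CRLF);
            out.writeBytes("Content-Disposition: form-data; name=\"" + fieldName
                    + "\"; filename=\"" + file.getName() + "\"" + CRLF);
            out.writeBytes("Content-Type: application/octet-stream" + CRLF);
            out.writeBytes(CRLF);               // blank line between the part headers and the data

            byte[] buffer = new byte[8192];
            int count;
            while ((count = in.read(buffer)) > 0) {
                out.write(buffer, 0, count);
            }
            out.writeBytes(CRLF);

            // ... and the whole body closes with the boundary double-dashed on *both* sides.
            out.writeBytes("--" + boundary + "--" + CRLF);
        }
        return conn.getResponseCode();          // a 500 here usually means the framing is off
    }
}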

Track upload can also be a pain in the neck because you need to wait 10 or 20 seconds for the track upload to finish and for the track analysis to complete. This time can be quite problematic if you have thousands of tracks to analyze: 20 seconds times a thousand tracks is more than five hours. No one wants to wait that long to analyze a music collection. However, it is possible to short-circuit this analysis. You can skip the upload entirely if we have already performed an analysis on your track of interest. To see if an analysis of a track is already available you can perform a query such as ‘get_duration’ using the MD5 hash of the audio file as the track ID. If you get a result back then we’ve already done the analysis, and you can skip the upload and just use the MD5 hash of your track as the ID for all of your queries. With all of the apps out there using the track analysis API (for instance, in just a week, donkdj has already analyzed over 30K tracks), our database of pre-cooked analyses is getting quite large. Soon I suspect that you won’t need to perform an upload for most tracks (certainly not mainstream tracks). We will already have the data.
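
Here is a rough sketch of that check-before-upload step, reusing the TrackAPI from the code samples above. The MD5 computation is plain Java; the getDuration call and the EchoNestException catch are my assumptions about how the “is it already analyzed?” probe would look, so treat them as illustrative and check the client’s javadoc for the real method.

import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class Md5Check {
    // The MD5 hash of the raw audio file doubles as the track ID once the
    // Echo Nest has seen the file, so compute it before deciding to upload.
    public static String md5Of(File file) throws IOException, NoSuchAlgorithmException {
        MessageDigest digest = MessageDigest.getInstance("MD5");
        try (FileInputStream in = new FileInputStream(file)) {
            byte[] buffer = new byte[8192];
            int count;
            while ((count = in.read(buffer)) > 0) {
                digest.update(buffer, 0, count);
            }
        }
        StringBuilder hex = new StringBuilder();
        for (byte b : digest.digest()) {
            hex.append(String.format("%02x", b));   // two lowercase hex digits per byte
        }
        return hex.toString();
    }
}

// Illustrative usage (getDuration and EchoNestException are assumptions, not the documented API):
//    String id = Md5Check.md5Of(new File("/path/to/music/track.mp3"));
//    try {
//        System.out.println("Duration: " + trackAPI.getDuration(id));   // hit: analysis already exists
//    } catch (EchoNestException e) {
//        id = trackAPI.uploadTrack(new File("/path/to/music/track.mp3"), false);  // miss: upload as usual
//    }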


The Echo Nest Remix SDK

One of the joys of working at the Echo Nest is the communal music playlist. Anyone can add, rearrange or delete music from the queue. Of course, if you need to bail out (like when that Cyndi Lauper track is sending you over the edge) you can always put on your headphones and tune out the mix. The other day, George Harrison’s “Here Comes the Sun” started playing, but this was a new version, with a funky drum beat that I had never heard before. Perhaps this was a lost track from the Beatles’ Love? Nope, it turns out it was just Ben, one of the Echo Nest developers, playing around with The Echo Nest Remix SDK.

The Echo Nest Remix SDK is an open source Python library that lets you manipulate music and video. It sits on top of the Echo Nest Analyze API and hides all of the messy details of sending audio to the Echo Nest and parsing the XML response, while still giving you access to the full power of the API.

remix is one of The Echo Nest’s secret weapons. It gives you the ability to analyze and manipulate music, and not just audio manipulations such as filtering or equalizing, but remixing based on the hierarchical structure of a song. remix sits on top of a very deep analysis of the music that teases out all sorts of information about a track. There’s high-level information such as the key, tempo, time signature, mode (major or minor) and overall loudness. There’s also information about the song structure. A song is broken down into sections (think verse, chorus, bridge, solo), bars, beats, tatums (the smallest perceptual metrical units of the song) and segments (short, uniform sound entities). remix gives you access to all of this information.

I must admit that I’ve been a bit reluctant to use remix, mainly because after 9 years at Sun Microsystems I’m a hard-core Java programmer (the main reason I went to Sun in the first place was because I liked Java so much). Every time I start to use Python I get frustrated because it takes me 10 times longer than it would in Java. I have to look everything up. How do I concatenate strings? How do I find the length of a list? How do I walk a directory tree? I can code so much faster in Java. But … if there was ever a reason for me to learn Python, it is this remix SDK. It is just so much fun, and it lets you do some of the most incredible things. For example, if you want to add a cowbell to every beat in a song, you can use remix to get the list of all of the beats (and their associated confidences) in a song, and simply overlay a cowbell strike at each of the beat offsets.

So here’s my first bit of Python code using remix. I grabbed one of the code samples that’s included in the distribution, had the aforementioned Ben spend two minutes walking me through the subtleties of AudioQuantums, and I was good to go. My first bit of code just takes a song and swaps beat two and beat three of all measures that have at least three beats.

import echonest.audio as audio   # from the Echo Nest remix SDK

def swap_beat_2_and_3(inputFile, outputFile):
    audiofile = audio.LocalAudioFile(inputFile)
    bars = audiofile.analysis.bars
    collect = audio.AudioQuantumList()
    for bar in bars:
        beats = bar.children()
        if len(beats) >= 3:
            (beats[1], beats[2]) = (beats[2], beats[1])
        for beat in beats:
            collect.append(beat)
    out = audio.getpieces(audiofile, collect)
    out.encode(outputFile)

The code analyzes the input, iterates through the bars, and if a bar has at least three beats, swaps the second and third (I must admit, even as a hard-core Java programmer, the ability to swap things with (a, b) = (b, a) is pretty awesome). It then encodes and writes out a new audio file. The resulting audio is surprisingly musical. Here’s the result as applied to Maynard Ferguson’s “Birdland”:

(And speaking of cool, SoundCloud is a great place to post these remixes; it lets anyone attach a comment to any point in time on a track.)

This is just great programming fun.  I think I’ll be spending my spare coding time learning more Python so I can explore all of the things one can do with remix.
