Posts Tagged python
Wicked smart playlists
Over the past few weekends I've been working on a little side project called the Playlist Builder Library (or PBL for short). The Playlist Builder Library is a Python library for creating and manipulating playlists. It's sort of like remix for playlists. With PBL you can take songs from playlists, albums, artists and genres, and flexibly combine them, rearrange them, filter them and sort them into new playlists.
For example, here's a PBL program that creates a radio station of today's top hits but guarantees that every 4th song is either by Sia or Katy Perry:
[gist https://gist.github.com/plamere/2fa839150815f040450d]
Here's the resulting playlist:
[spotify spotify:user:plamere:playlist:6TIeQMve7pVBLCAY8WUX3L]
That's 5 lines of code to create a non-trivial playlist.
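In case the gist doesn't render here, the program looks roughly like this. This is a sketch from memory rather than the gist itself – ArtistTopTracks and show_source are my guesses at PBL's spellings, so check the docs for the exact names:

from pbl import *

# a sketch of the gist above: sample today's top hits and reserve
# every 4th slot for a Sia or Katy Perry track. ArtistTopTracks and
# show_source are guessed names - check the PBL docs.
top_hits = Sample(PlaylistSource("Today's Top Hits"), 30)
guests = Alternate([ArtistTopTracks('Sia'), ArtistTopTracks('Katy Perry')])

# three pulls from top_hits for every one guest track, so every
# 4th song comes from the guests source
radio = Alternate([top_hits] * 3 + [guests])
show_source(radio)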
PBL supports all sorts of sources for tracks, such as Spotify playlists, top tracks from artists, albums, genres, and the extremely flexible and powerful Echo Nest playlisting API. These sources can be manipulated in all sorts of interesting ways. Here are a couple more examples:
You can filter all the songs in ‘Your favorite coffeehouse’ to get just the lowest energy songs:
coffee = PlaylistSource('coffeehouse', ucoffee_house)
low_energy_coffee = AttributeRangeFilter(coffee, 'echonest.energy', max_val=.5)
You can combine your favorite playlists into a single one:
playlist_names = ['Your Favorite Coffeehouse', 'Acoustic Summer', 'Acoustic Covers', 'Rainy Day']
all = DeDup(Alternate([Sample(PlaylistSource(n), 10) for n in playlist_names]))
Even sophisticated tasks are really easy. For instance, imagine dad is on a roadtrip with his daughter. They agree to alternate between dad's music and daughter's music. Dad is selfish, so he makes a playlist that alternates the longest jazz classics with the shortest teen party tracks using this 3-line script:
teen_party = First(Sorter(PlaylistSource('Teen Party'), 'duration'), 10)
jazz_classics = Last(Sorter(PlaylistSource('Jazz Classics'), 'duration'), 10)
both = Alternate([teen_party, Reverse(jazz_classics)])
Here's the result:
[spotify spotify:user:plamere:playlist:0VKGTR6eCPe55bBjezi5z3]
Note that the average duration of Teen Party songs is much less than 3 minutes, while the average duration of Jazz Classics is above 6 minutes. Selfish dad gets to listen to his music twice as long with this jazz-skewing playlist.
There’s a whole lot of nifty things that can be done with PBL. If you are a Python programmer with an itch for creating new playlists check it out. The docs are online at http://pbl.readthedocs.org/ and the source is at https://github.com/plamere/pbl.
PBL is pretty modular, so it is easy to add new sources and manipulators. If you have an idea or two for changes, let me know or just send me a pull request.
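For a flavor of what a new component involves, here's a minimal sketch of a custom manipulator. It assumes PBL's component protocol as I understand it – a name attribute plus a next_track() method that returns the next track ID, or None when the stream is exhausted – so check the docs for the exact contract:

import random

class RandomSkip(object):
    # a sketch of a custom PBL manipulator that randomly drops tracks
    # from its source. Assumes the name/next_track() protocol
    # described above.

    def __init__(self, source, keep=.5):
        self.name = 'random skip of ' + source.name
        self.source = source
        self.keep = keep

    def next_track(self):
        while True:
            track = self.source.next_track()
            if track is None or random.random() < self.keep:
                return track

Something like RandomSkip(PlaylistSource('Teen Party'), keep=.25) would then pass through roughly a quarter of the playlist's tracks.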
Spotify iOS token exchange service in python
On the very same day that Spotify announced its acquisition of The Echo Nest, they released a brand new Spotify iOS SDK. Trying this new SDK out has been high on my priority list, and finally, after a few crazy weeks, I've had a bit of time to take it for a test drive. I walked through the beginner's tutorial and was up and running with an iOS app in the simulator in about 30 minutes. Easy Peazy! The bit that took the longest was setting up the token exchange service. This is a service that you need to run on your own server as part of the authentication process. The tutorial provides a sample service written in ruby; however, I'm not a ruby programmer, so I had to go through all the gyrations of installing ruby, figuring out how to install gems and getting the required gems installed. Once I had everything installed it worked fine and I was able to get the tutorial running. However, I figure that I'll be working with the iOS SDK a great deal in the future, and I'd rather not have to deal with a ruby server every time I create a new app, so for my Sunday morning programming project I've re-written the ruby swap service in python. The service is on github here: spotify_token_swap
If you are going to be using the new Spotify iOS SDK to create apps and you’d rather deal with python than ruby, then you might find it useful.
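For the curious, the heart of any swap service is a single endpoint that trades the authorization code the app receives from Spotify for access and refresh tokens at accounts.spotify.com. Here's a stripped-down sketch of that exchange using Flask and requests – not the actual spotify_token_swap code; the client ID, secret and callback URL are placeholders, and the real service does more (including token refresh):

import base64
import json

import requests
from flask import Flask, request

app = Flask(__name__)

# placeholders - use your own app's settings
CLIENT_ID = 'your-client-id'
CLIENT_SECRET = 'your-client-secret'
CALLBACK_URL = 'your-app://callback'

@app.route('/swap', methods=['POST'])
def swap():
    # trade the authorization code for access + refresh tokens
    auth = base64.b64encode(CLIENT_ID + ':' + CLIENT_SECRET)
    resp = requests.post('https://accounts.spotify.com/api/token',
        data={
            'grant_type': 'authorization_code',
            'code': request.form['code'],
            'redirect_uri': CALLBACK_URL,
        },
        headers={'Authorization': 'Basic ' + auth})
    return json.dumps(resp.json()), resp.status_code

if __name__ == '__main__':
    app.run(port=1234)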
New Genre APIs
Posted by Paul in code, Music, The Echo Nest, web services on January 16, 2014
Today at the Echo Nest we are pushing out an update to our Genre APIs. The new APIs let you get all sorts of information about any of over 800 genres, including a description of the genre, representative artists in the genre, similar genres, and links to web resources for the genre (such as a wikipedia page, if one exists for the genre). You can also use the genres to create various types of playlists. With these APIs you can build all sorts of music exploration apps like Every Noise At Once, Music Popcorn and Genre-A-Day.
The new APIs are quite simple to use. Here are a few python examples created using pyen.
List all of the available genres with a description
import pyen
en = pyen.Pyen()
response = en.get('genre/list', bucket=['description'])
for g in response['genres']:
    print g['name'], '-', g['description']
This outputs text like so:
a cappella – A cappella is singing without instrumental accompaniment. From the Italian for "in the manner of the chapel," a cappella may be performed solo or by a group.
abstract hip hop –
acid house – From house music came acid house, developed in the mid-'80s by Chicago DJs experimenting with the Roland TB-303 synthesizer. That instrument produced the subgenre's signature squelching bass, used to create a hypnotic sound.
acid jazz – Acid jazz, also called club jazz, is a style of jazz that takes cues from a number of genres, including funk, hip-hop, house, and soul.
…
We can get the top artists for any genre like so:
import pyen
import sys
en = pyen.Pyen()
if len(sys.argv) > 1:
    genre = ' '.join(sys.argv[1:])
    response = en.get('genre/artists', name=genre)
    for artist in response['artists']:
        print artist['name']
else:
    print "usage: python top_artists_for_genre.py genre name"
Here are the top artists for 'cool jazz':
% python top_artists_for_genre.py cool jazz
Thelonious Monk
Stan Getz
Lee Konitz
The Dave Brubeck Quartet
Bill Evans
Cannonball Adderley
Art Pepper
Charlie Parker
John Coltrane
Gil Evans
Ahmad Jamal
Miles Davis
Horace Silver
Dave Brubeck
Oliver Nelson
We can find similar genres to any genre with this bit of code:
import pyen
import sys
en = pyen.Pyen()
if len(sys.argv) > 1:
    genre = ' '.join(sys.argv[1:])
    response = en.get('genre/similar', name=genre)
    for genre in response['genres']:
        print genre['name']
else:
    print "usage: python sim_genres.py genre name"
Sample output:
% python sim_genres.py cool jazz
bebop
jazz
hard bop
contemporary post-bop
soul jazz
big band
jazz christmas
stride
jazz funk
jazz fusion
avant-garde jazz
free jazz
We can use the genres to create excellent genre playlists. To do so, create a playlist of type 'genre-radio' and give the genre name as a seed. We've also added a new parameter called 'genre_preset' that, if specified, controls the type of songs that will be added to the playlist. You can choose from core, in_rotation, and emerging. Core genre playlists are great for introducing a new listener to the genre. Here's a bit of code that generates a core playlist for any genre:
import pyen
import sys
en = pyen.Pyen()
if len(sys.argv) < 2:
    print 'Usage: python genre_playlist.py seed genre name'
else:
    genre = ' '.join(sys.argv[1:])
    response = en.get('playlist/static', type='genre-radio', genre_preset='core-best', genre=genre)
    for i, song in enumerate(response['songs']):
        print "%d %s by %s" % ((i + 1), song['title'], song['artist_name'])
The core classic rock playlist looks like this:
- Simple Man by Lynyrd Skynyrd
- Born To Be Wild by Steppenwolf
- All Along The Watchtower by Jimi Hendrix
- Kashmir by Led Zeppelin
- Sunshine Of Your Love by Cream
- Let’s Work Together by Canned Heat
- Gimme Shelter by The Rolling Stones
- It’s My Life by The Animals
- 30 Days In The Hole by Humble Pie
- Midnight Rider by The Allman Brothers Band
- The Joker by Steve Miller Band
- Fortunate Son by Creedence Clearwater Revival
- Black Betty by Ram Jam
- Heart Full Of Soul by The Yardbirds
- Light My Fire by The Doors
The ‘in rotation’ classic rock playlist looks like this:
- Heaven on Earth by Boston
- Doom And Gloom by The Rolling Stones
- Little Black Submarines by The Black Keys
- I Gotsta Get Paid by ZZ Top
- Fly Like An Eagle by Steve Miller Band
- Blue On Black by Kenny Wayne Shepherd
- Driving Towards The Daylight by Joe Bonamassa
- When A Blind Man Cries by Deep Purple
- Over and Over (Live) by Joe Walsh
- The Best Is Yet To Come by Scorpions
- World Boss by Gov’t Mule
- One Way Out by The Allman Brothers Band
- Corned Beef City by Mark Knopfler
- Bleeding Heart by Jimi Hendrix
- My Sharona by The Knack
While the emerging ‘classic rock’ playlist looks like this:
- If You Were in Love by Boston
- Beggin’ by Shocking Blue
- Speak Now by The Answer
- Mystic Highway by John Fogerty
- Hell Of A Season by The Black Keys
- No Reward by Gov’t Mule
- Pretty Wasted by Tito & Tarantula
- The Battle Of Evermore by Page & Plant
- I Got All You Need by Joe Bonamassa
- What You Gonna Do About Me by Buddy Guy
- I Used To Could by Mark Knopfler
- Wrecking Ball by Joe Walsh
- The Circle by Black Country Communion
- You Could Have Been a Lady by April Wine
- 15 Lonely by Walter Trout
The new Genre APIs are really quite fun to use. I’m looking forward to seeing a whole new world of music exploration and discovery apps built around these APIs.
Finding duplicate songs in your music collection with Echoprint
Posted by Paul in code, Music, The Echo Nest on June 25, 2011
This week, The Echo Nest released Echoprint – an open source music fingerprinting and identification system. A fingerprinting system like Echoprint recognizes music based only upon what the music sounds like. It doesn't matter what bit rate, codec or compression rate was used (up to a point) to create a music file, nor does it matter what sloppy metadata has been attached to it: if the music sounds the same, the music fingerprinter will recognize that. There are a whole bunch of really interesting apps that can be created using a music fingerprinter. Among my favorite iPhone apps are Shazam and Soundhound – two fantastic over-the-air music recognition apps that let you hold your phone up to the radio and then tell you, in just a few seconds, what song is playing. It is no surprise that these apps are top sellers in the iTunes app store. They are the closest thing to magic I've seen on my iPhone.
In addition to super sexy applications like Shazam, music identification systems are also used for more mundane things like copyright enforcement (helping sites like YouTube keep copyright violations out of the intertubes), metadata cleanup (attaching the proper artist, album and track name to every track in a music collection), and scan & match services like Apple's soon-to-be-released iCloud music service, which uses music identification to avoid lengthy and unnecessary music uploads. One popular use of music identification systems is to de-duplicate a music collection. Programs like tuneup will help you find and eliminate duplicate tracks in your music collection.
This week I wanted to play around with the new Echoprint system, so I decided I'd write a program that finds and reports duplicate tracks in my music collection. Note: if you are looking to de-duplicate your music collection but you are not a programmer, this post is *not* for you; go and get tuneup or some other de-duplicator. The primary purpose of this post is to show how Echoprint works, not to replace a commercial system.
How Echoprint works
Echoprint, like many music identification services, is a multi-step process: code generation, ingestion and lookup. In the code generation step, musical features are extracted from audio and encoded into a string of text. In the ingestion step, codes for all songs in a collection are generated and added to a searchable database. In the lookup step, the codegen string is generated for an unknown bit of audio and is used as a fuzzy query against the database of previously ingested codes. If a suitably high-scoring match is found, the info on the matching track is returned. The devil is in the details. Generating a short, high-level representation of audio that is suitable for searching and insensitive to encodings, bit rate, noise and other transformations is a challenge. Similarly challenging is representing a code in a way that allows for high-speed querying and imperfect matching of noisy codes.
Echoprint consists of two main components: echoprint-codegen and echoprint-server.
Code Generation
echoprint-codegen is responsible for taking a bit of audio and turning it into an echoprint code. You can grab the source from github and build the binary for your local platform. The binary takes an audio file as input and outputs a block of JSON that contains song metadata (found in the ID3 tags of the audio) along with a code string. Here's an example:
plamere$ echoprint-codegen test/unison.mp3 0 10
[
  {"metadata":{"artist":"Bjork", "release":"Vespertine", "title":"Unison",
    "genre":"", "bitrate":128, "sample_rate":44100, "duration":405,
    "filename":"test/unison.mp3", "samples_decoded":110296,
    "given_duration":10, "start_offset":1, "version":4.11,
    "codegen_time":0.024046, "decode_time":0.641916},
   "code_count":174,
   "code":"eJyFk0uyJSEIBbcEyEeWAwj7X8JzfDvKnuTAJIojWACwGB4QeM\
HWCw0vLHlB8IWeF6hf4PNC2QunX3inWvDCO9WsF7heGHrhvYV3qvPEu-\
87s9ELLi_8J9VzknReEH1h-BOKRULBwyZiEulgQZZr5a6OS8tqCo00cd\
p86ymhoxZrbtQdgUxQvX5sIlF_2gUGQUDbM_ZoC28DDkpKNCHVkKCgpd\
OHf-wweX9adQycnWtUoDjABumQwbJOXSZNur08Ew4ra8lxnMNuveIem6\
LVLQKsIRLAe4gbj5Uxl96RpdOQ_Noz7f5pObz3_WqvEytYVsa6P707Jz\
j4Oa7BVgpbKX5tS_qntcB9G--1tc7ZDU1HamuDI6q07vNpQTFx22avyR",
   "tag":0}
]
In this example, I'm only fingerprinting the first 10 seconds of the song to conserve space. The code string is just a base64 encoding of a zlib compression of the original code string, which is a hex-encoded series of ASCII numbers. A full version of this code is what is indexed by the lookup server for fingerprint queries. Codegen is quite fast. It scans audio at roughly 250x real time per processor after decoding and resampling to 11025 Hz. This means a full song can be scanned in less than 0.5s on an average computer, and an amount of audio suitable for querying (30s) can be scanned in less than 0.04s. Decoding from MP3 will be the bottleneck for most implementations. Decoders like mpg123 or ffmpeg can decode 30s of mp3 audio to 11025 Hz PCM in under 0.10s.
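You can peek inside a code string yourself. Since it's url-safe base64 wrapped around zlib-compressed data, a few lines of python unwrap it (a sketch, assuming the display-only backslashes and line breaks above are stripped first):

import base64
import zlib

def decode_code_string(code):
    # unwrap codegen's compressed code string back into the
    # hex-encoded series of ASCII numbers described above
    code = code.replace('\\', '').replace('\n', '').strip()
    # urlsafe base64 may need '=' padding to a multiple of 4
    code += '=' * (-len(code) % 4)
    return zlib.decompress(base64.urlsafe_b64decode(code))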
The Echoprint Server
The Echoprint server is responsible for maintaining an index of fingerprints of (potentially) millions of tracks and serving up queries. The lookup server uses the popular Apache Solr as the search engine. When a query arrives, the codes that have high overlap with the query code are retrieved using Solr. The lookup server then filters through these candidates and scores them based on a number of factors such as the number of codeword matches, the order and timing of codes and so on. If the best matching code has a high enough score, it is considered a hit and the ID and any associated metadata is returned.
To run a server, first you ingest and index full length codes for each audio track of interest into the server index. To perform a lookup, you use echoprint-codegen to generate a code for a subset of the file (typically 30 seconds will do) and issue that as a query to the server.
The Echo Nest hosts a lookup server, so for many use cases you won't need to run your own lookup server. Instead, you can make queries to the Echo Nest via the song/identify call. (We also expect that many others may run public echoprint servers as well.)
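A lookup against the hosted server is then just one HTTP call. Here's a hedged sketch – the parameter names follow my reading of the song/identify docs, so double-check them before relying on this:

import requests

def identify(code, api_key):
    # ask the hosted Echo Nest server whether this code matches a
    # known song; 'code' is the codegen string for ~30s of audio
    response = requests.get(
        'http://developer.echonest.com/api/v4/song/identify',
        params={'api_key': api_key, 'code': code})
    return response.json()['response'].get('songs', [])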
Creating a de-duplicator
With that quick introduction to how Echoprint works, let's look at how we could create a de-duplicator. The core logic is extremely simple:
create an empty echoprint-server
foreach mp3 in my-music-collection:
    code = echoprint-codegen(mp3)          // generate the code
    result = echoprint-server.query(code)  // look it up
    if result:                             // did we find a match?
        print 'duplicate for', mp3, 'is', result
    else:                                  // no, so ingest the code
        echoprint-server.ingest(mp3, code)
We create an empty fingerprint database. For each song in the music collection we generate an Echoprint code and query the server for a match. If we find one, then the mp3 is a duplicate and we report it. Otherwise, it is a new track, so we ingest the code for the new track into the echoprint server. Rinse. Repeat.
I’ve written a python program dedup.py to do just this. Being a cautious sort, I don’t have it actually delete duplicates, but instead, I have it just generate a report of duplicates so I can decide which one I want to keep. The program also keeps track of its state so you can re-run it whenever you add new music to your collection.
Here’s an example of running the program:
% python dedup.py ~/Music/iTunes
1 1 /Users/plamere/Music/misc/ABBA/Dancing Queen.mp3
( lines omitted...)
173 41 /Users/plamere/Music/misc/Missy Higgins - Katie.mp3
174 42 /Users/plamere/Music/misc/Missy Higgins - Night Minds.mp3
175 43 /Users/plamere/Music/misc/Missy Higgins - Nightminds.mp3
duplicate /Users/plamere/Music/misc/Missy Higgins - Nightminds.mp3 /Users/plamere/Music/misc/Missy Higgins - Night Minds.mp3
176 44 /Users/plamere/Music/misc/Missy Higgins - This Is How It Goes.mp3
dedup.py prints out each mp3 as it processes it, and when it finds a duplicate it reports it. It also collects a duplicate report in a file in pblml format like so:
duplicate <sep> iTunes Music/Bjork/Greatest Hits/Pagan Poetry.mp3 <sep> original <sep> misc/Bjork Radio/Bjork - Pagan Poetry.mp3
duplicate <sep> iTunes Music/Bjork/Medulla/Desired Constellation.mp3 <sep> original <sep> misc/Bjork Radio/Bjork - Desired Constellation.mp3
duplicate <sep> iTunes Music/Bjork/Selmasongs/I've Seen It All.mp3 <sep> original <sep> misc/Bjork Radio/Bjork - I've Seen It All.mp3
Again, dedup.py doesn't actually delete any duplicates; it just gives you this nifty report of duplicates in your collection.
Trying it out
If you want to give dedup.py a try, follow these steps:
- Download, build and install echoprint-codegen
- Download, build, install and run the echoprint-server
- Get dedup.py.
- Edit line 10 in dedup.py to set the sys.path to point at the echoprint-server API directory
- Edit line 13 in dedup.py to set the _codegen_path to point at your echoprint-codegen executable
% python dedup.py ~/Music
This will find all of the dups and write them to the dedup.dat file. It takes about 1 second per song. To restart (this will delete your fingerprint database) run:
% python dedup.py --restart
Note that you can actually run the dedup process without running your own echoprint-server (saving you the trouble of installing Apache Solr, Tokyo Cabinet and Tokyo Tyrant). The downside is that you won't have a persistent server, which means that you won't be able to incrementally de-dup your collection – you'll need to do it all in one pass. To use the local mode, just add local=True to the fp.py calls. The index is then kept in memory; no Solr or Tokyo Tyrant is needed.
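Roughly, the local-mode version of the core loop looks like this. This is only a sketch – the fp module is echoprint-server's API/fp.py, and the function names and signatures below are my reading of that code, so verify them against the source before use:

import fp  # echoprint-server's API/fp.py; names below are assumptions

def check_and_ingest(track_id, code_string):
    # query the in-memory index; if the code matches a previously
    # ingested track, report it, otherwise ingest the new code
    match = fp.best_match_for_query(code_string, local=True)
    if match.match():
        return match.TRID   # a duplicate of this earlier track
    fp.ingest([{'track_id': track_id, 'fp': code_string}], local=True)
    return None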
Wrapping up
dedup.py is just one little example of the kind of application that developers will be able to create using Echoprint. I expect to see a whole lot more in the next few months. Before Echoprint, song identification was out of the reach of the typical music application developer; it was just too expensive. Now with Echoprint, anyone can incorporate music identification technology into their apps. The result will be fewer headaches for developers and much better music applications for everyone.
Echo Nest Remix at the Boston Python Meetup Group
Posted by Paul in events, remix, The Echo Nest on July 15, 2010
Next week I’ll be giving a talk about remixing music with Echo Nest remix at the Boston Python Meetup Group. If you are in the Boston / Cambridge area next week, be sure to come on by and say ‘hi’. Info and RSVP for the talk are here: The Boston Python Meetup Group on Meetup.com
Here’s the abstract for the talk:
Paul Lamere will tell us about Echo Nest remix. Remix is an open source Python library for remixing music. With remix you can use Python to rearrange a track, combine it with others, beat/pitch shift it etc. – essentially it lets you treat a song like silly putty.
The Swinger is an interesting example of what it can do that made the rounds of the blogosphere: it morphs songs to give them a swing rhythm.
For more details about the type of music remixing you can do with remix, feel free to read: http://musicmachinery…
Python and Music at PyCon 2010
Posted by Paul in code, Music, The Echo Nest on February 15, 2010
If you are lucky enough to be heading to PyCon this week and are interested in hacking on music, there are two talks that you should check out:
DJing in Python: Audio processing fundamentals – In this talk Ed Abrams talks about his experiences building a real-time audio mixing application in Python. I caught a dry run of this talk at the local Python SIG – lots of info packed into this 30-minute talk. One of the big takeaways is the results of Ed's evaluation of a number of Pythonic audio processing libraries. Sunday 01:15pm, Centennial I
Remixing Music Pythonically – This is a talk by Echo Nest friend and über-developer Adam Lindsay. In this talk Adam talks about the Echo Nest remix library. Adam, a frequent contributor to remix, will offer details on the concise expressiveness offered when editing multimedia driven by content-based features, and some insights on what Pythonic magic did and didn’t work in the development of the modules. Audio and video examples of the fun-yet-odd outputs that are possible will be shown. Sunday 01:55pm, Centennial I
The schedulers at PyCon have done a really cool thing and have put the talks back to back in the same room. Also, keep your eye out for the Hacking on Music OpenSpace.
The Echo Nest gets ready for Boston Music Hack Day
Posted by Paul in code, java, Music, The Echo Nest, web services on November 19, 2009
We've been extremely busy this week at the Echo Nest getting ready for the Boston Music Hack Day. Not only have we been figuring out menus and panel room assignments and dealing with a waitlist, we've also been releasing a set of new API features. Here's a quick rundown of what we've done:
- get_images – a frequent request from developers – we now have an API method that will let you get images for an artist. Note that we are releasing this method as a sneak preview for the hack day – we have images for over 60 thousand artists, but we will be aggressively adding more images over the next few weeks (60 thousand artists is a lot of artists, but we'd like to have lots more). We'll also be expanding our sources of images to include many more sources. The results of get_images are already good: 95% of the time you'll get images. Over the next few weeks, the results will get even better.
- get_biographies – another frequent request from developers – we now have a get_biographies API method that will return a set of artist biographies for any artist. We currently have biographies for about a quarter million artists – and just as with get_images – we are working hard to expand the breadth and depth of this coverage. Nevertheless, with coverage for a quarter million artists, 99.99% of the time when you ask for a biography we’ll have it.
- get_similar – we’ve expanded the number of similar artists you can get back from get_similar from 15 to 100. This gives you lots more info for building playlisting and music discovery apps.
- buckets – one issue our developers have had is that filling out info on an artist often took a number of calls to the Echo Nest – one to get similars, one to get audio, one for video, familiarity, hotttnesss, etc. To fill out an artist page could take half a dozen calls. To reduce the number of calls needed to get artist information we've added a 'bucket' parameter to the search_artist, get_similar and get_profile calls. The bucket parameter allows you to specify which additional artist info should be returned in the call. You can specify 'audio', 'biographies', 'blogs', 'familiarity', 'hotttnesss', 'news', 'reviews', 'urls', 'images' or 'video', and whenever you get artist data back you'll get the specified info included. For example, the call:
http://developer.echonest.com/api/get_profile
    ?api_key=EHY4JJEGIOFA1RCJP
    &id=music://id.echonest.com/~/AR/ARH6W4X1187B99274F
    &version=3
    &bucket=familiarity
    &bucket=hotttnesss
will return an artist block that looks like this:
<artist>
    <name>Radiohead</name>
    <id>music://id.echonest.com/~/AR/ARH6W4X1187B99274F</id>
    <familiarity>0.899230928024</familiarity>
    <hotttnesss>0.847409181874</hotttnesss>
</artist>
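If you'd rather not paste URLs into a browser, the same bucketed call is easy to script (a sketch using plain urllib; substitute your own API key):

import urllib

# build the bucketed get_profile call shown above; a list of tuples
# lets urlencode repeat the bucket parameter
params = urllib.urlencode([
    ('api_key', 'EHY4JJEGIOFA1RCJP'),
    ('id', 'music://id.echonest.com/~/AR/ARH6W4X1187B99274F'),
    ('version', '3'),
    ('bucket', 'familiarity'),
    ('bucket', 'hotttnesss'),
])
print urllib.urlopen('http://developer.echonest.com/api/get_profile?' + params).read()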
There’s another new feature that we are starting to roll out. It’s called Echo Source – it allows the developer to get content (such as images, audio, video etc.) based upon license info. Echo Source is a big deal and deserves a whole post – but that’s going to have to wait until after Music Hack Day. Suffice it to say that with Echo Source you’ll have a new level of control over what content the Echo Nest API returns.
We’ve updated our Java and Python libraries to support the new calls. So grab yourself an API key and start writing some music apps.
Artist radio in 10 lines of code
Posted by Paul in code, fun, Music, playlist, The Echo Nest, web services on July 16, 2009
Last week we released Pyechonest, a Python library for the Echo Nest API. Pyechonest gives the Python programmer access to the entire Echo Nest API, including artist and track level methods. Now, after 9 years working at Sun Microsystems, I am a diehard Java programmer, but I must say that I really enjoy the nimbleness and expressiveness of Python. It's fun to write little Python programs that do the exact same thing as big Java programs. For example, I wrote an artist radio program in Python that, given a seed artist, generates a playlist of tracks by wandering around the artists in the neighborhood of the seed artist and gathering audio tracks. With Pyechonest, the core logic is 10 lines of code:
def wander(band, max=10):
    played = []
    while max:
        if band.audio():
            audio = random.choice(band.audio())
            if audio['url'] not in played:
                play(audio)
                played.append(audio['url'])
                max -= 1
        band = random.choice(band.similar())

(You can see/grab the full code with all the boilerplate in the SVN repository)
This method takes a seed artist (band) and selects a random track from the set of audio that The Echo Nest has found on the web for that artist; if we haven't already played it, we play it. Then we select a near neighbor of the seed artist and do it all again, until we've played the desired number of songs.
For such a simple bit of code, the playlists generated are surprisingly good. Here are a few examples:
Seed Artist: Led Zeppelin:
- You Shook Me by Led Zeppelin via licorice-pizza
- Suicide by Thin Lizzy via dmg541
- I Ain't The One by Lynyrd Skynyrd via artdecade
- Fortunate Son by Creedence Clearwater Revival via onesweetsong
- Susie-Q by Dale Hawkins via boogiewoogieflu
(I think the Dale Hawkins version of Susie-Q after CCR’s Fortunate Son is just brilliant)
Seed Artist: The Decemberists:
- The Wanting Comes In Waves/Repaid by The Decemberists via londononburgeoningmetropolis
- Amazing Grace by Sufjan Stevens via itallstarted
- Baby’s Romance by Chris Garneau via slowcoustic
- Saint Simon by The Shins via pastaprima
- Made Up Love Song #43 by Guillemots via merryswankster
(Note that audio for these examples is audio found on the web – and just like anything on the web the audio could go away at any time)
I think these artist-radio style playlists rival just about anything you can find on current Internet radio sites – which ain't too bad for 10 lines of code.
Where’s the Pow?
This morning, while eating my Father's Day bagel, I got to play some more with the video aspects of the Echo Nest remix API. The video remix is pretty slick. You use all of the tools that you use in the audio remix, except that the object you are manipulating has a video component as well. This makes it easy to take an audio remix and turn it into a video remix. For instance, here's the remix code to create a remix that includes the first beat of every bar:
audiofile = audio.LocalAudioFile(input_filename)
collect = audio.AudioQuantumList()
for bar in audiofile.analysis.bars:
    collect.append(bar.children()[0])
out = audio.getpieces(audiofile, collect)
out.encode(output_filename)
To turn this into a video remix, just change the code to:
av = video.loadav(input_filename)
collect = audio.AudioQuantumList()
for bar in av.audio.analysis.bars:
    collect.append(bar.children()[0])
out = video.getpieces(av, collect)
out.save(output_filename)
The code is nearly identical, differing in loading and saving, while the core remix logic stays the same.
To make a remix of a YouTube video, you need to save a local copy of the video. I've been using KeepVid to save a local flv (flash video format) copy of any YouTube video.
Today I played with the track ‘Boom Boom Pow’ by the Black Eyed Peas. It’s a fun song for remix because it has a very strong beat, and already has a remix feel to it. And since the song is about digital transformation, it seems to be a good target for remix experiments. (and just maybe they won’t mind the liberties I’ve taken with their song).
Here’s the original (click through to YouTube to watch it since embedding is not allowed):
Just Boom
The first remix is to only include the first beat of every measure. The code is this:
for bar in av.audio.analysis.bars:
    collect.append(bar.children()[0])
Just Pow
Change the beat included from beat zero to beat three, and we get something that sounds very different:
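In code, that's a one-character change from the loop above (with a guard added, since not every bar is guaranteed to have four beats):

for bar in av.audio.analysis.bars:
    if len(bar.children()) > 3:   # skip bars with fewer than 4 beats
        collect.append(bar.children()[3])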
Pow Boom Boom
Here’s a version with the beats reversed. The core logic for this transformation is one line of code:
av.audio.analysis.beats.reverse()
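Dropped into the loadav/getpieces skeleton from above, the whole reversal is just (a sketch following the earlier pattern):

av = video.loadav(input_filename)
beats = av.audio.analysis.beats
beats.reverse()
out = video.getpieces(av, beats)
out.save(output_filename)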
The 5/4 Version
Here’s a version that’s in 5/4 – to make this remix I duplicated the first beat and swapped beats 2 and 3. This is my favorite of the bunch.
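Here's one way to read that recipe in code (a sketch – I'm taking 'beats 2 and 3' to mean the second and third beats of each bar; the original remix may differ):

collect = audio.AudioQuantumList()
for bar in av.audio.analysis.bars:
    beats = bar.children()
    if len(beats) >= 4:
        collect.append(beats[0])
        collect.append(beats[0])   # duplicate the first beat
        collect.append(beats[2])   # ...then swap the 2nd and 3rd beats
        collect.append(beats[1])
        collect.append(beats[3])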
These transformations are of the simplest variety, taking just a couple of minutes to code and try out. I’m sure some budding computational remixologist could do some really interesting things with this API.
Note that the latest video support is not in the main branch of remix. If you want to try some of this out you’ll need to check out the bl-video branch from the svn repository. But this is guaranteed to be rolled into the main branch before the upcoming Music Hackday. Update: the latest video support is now part of the main branch. If you want to try it out, check it out from the trunk of the SVN repository. So download the code, grab your API key and start remixing.
Update: As Brian pointed out in the comments there was some blocking on the remix renders. This has been fixed, so if you grab the latest code, the video output quality is as good as the input.
The Echo Nest remix 1.0 is released!
Posted by Paul in code, fun, Music, remix, The Echo Nest, web services on May 12, 2009
Version 1.0 of the Echo Nest remix has been released. Echo Nest Remix is an open source SDK for Python that lets you write programs that manipulate music. For example, here’s a python function that will take all the beats of a song, and reverse their order:
def reverse(inputFilename, outputFilename):
    audioFile = audio.LocalAudioFile(inputFilename)
    chunks = audioFile.analysis.beats
    chunks.reverse()
    reversedAudio = audio.getpieces(audioFile, chunks)
    reversedAudio.encode(outputFilename)
When you apply this to a song by The Beatles you get something that sounds like this:
which is surprisingly recognizable, musical – and yet different from the original.
Quite a few web apps have been written that use remix. One of my favorites is DonkDJ, which will 'put a donk' on any song. Here's an example: Hung Up by Madonna (with a Donk on it):
This Is My Jam lets you create mini-mixes to share with people.
And where would the web be without the ability to add more cowbell to any song?
There’s lots of good documentation already for remix. Adam Lindsay has created a most excellent overview and tutorial for remix. There’s API documentation and there’s documentation for the underlying Echo Nest web services that perform the audio analysis. And of course, the source is available too.
So, if you are looking for that fun summer coding project, or if you need an excuse to learn Python, or perhaps you are a budding computational remixologist, download remix, grab an API key from the Echo Nest and start writing some remix code.
Here’s one more example of the fun stuff you can do with remix. Guess the song, and guess the manipulation: