Posts Tagged sxsw

How the Autocanonizer works

Last week at the SXSW Music Hack Championship hackathon I built The Autocanonizer, an app that tries to turn any song into a canon by playing it against a copy of itself. In this post, I explain how it works.

At the core of The Autocanonizer are three functions: (1) build simultaneous audio streams for the two voices of the canon, (2) play them back simultaneously, and (3) supply a visualization that gives the listener an idea of what is happening under the hood. Let’s look at each of these three functions:

(1A) Build simultaneous audio streams – finding similar sounding beats
The goal of The Autocanonizer is to fold a song in on itself in such a way that the result still sounds musical. To do this, we use The Echo Nest analyzer and the jremix library to do much of the heavy lifting. First we use the analyzer to break the song down into beats. Each beat is associated with a timestamp, a duration, a confidence, and a set of overlapping audio segments. An audio segment contains a detailed description of a single audio event in the song: harmonic data (i.e. the pitch content), timbral data (the texture of the sound), and a loudness profile. Using this info we can create a Beat Distance Function (BDF) that returns a value representing the relative distance between any two beats in the audio space. Beats that are close together in this space sound very similar; beats that are far apart sound very different. The BDF works by calculating the average distance between the overlapping segments of the two beats, where the distance between any two segments is a weighted combination of the Euclidean distances between their pitch, timbral, loudness, duration and confidence vectors. The weights control which part of the sound takes more precedence in determining beat distance. For instance, we can give more weight to the harmonic content of a beat, or to its timbral quality. There’s no hard science for selecting the weights; I just picked some to start with and tweaked them a few times based on how well they worked. I started with the same weights that I used when creating the Infinite Jukebox (which also relies on beat similarity), but ultimately gave more weight to the harmonic component, since good harmony is so important to The Autocanonizer.
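In rough JavaScript, a beat distance function of this sort might look something like the sketch below. The segment field names (pitches, timbre, loudness_max, duration, confidence) follow the Echo Nest analysis format, but the weights, the overlappingSegments field, and the way segments are paired up are illustrative guesses, not the exact values the app uses.

```javascript
// Illustrative weights: harmonic content gets the most emphasis here.
var weights = {
    pitches: 10,
    timbre: 1,
    loudness_max: 1,
    duration: 1,
    confidence: 1
};

// Euclidean distance between two equal-length vectors.
function euclidean(a, b) {
    var sum = 0;
    for (var i = 0; i < a.length; i++) {
        var d = a[i] - b[i];
        sum += d * d;
    }
    return Math.sqrt(sum);
}

// Weighted distance between two audio segments.
function segmentDistance(s1, s2) {
    return weights.pitches * euclidean(s1.pitches, s2.pitches) +
           weights.timbre * euclidean(s1.timbre, s2.timbre) +
           weights.loudness_max * Math.abs(s1.loudness_max - s2.loudness_max) +
           weights.duration * Math.abs(s1.duration - s2.duration) +
           weights.confidence * Math.abs(s1.confidence - s2.confidence);
}

// Beat distance: the average distance between the overlapping segments of
// two beats. 'overlappingSegments' is an assumed per-beat field.
function beatDistance(b1, b2) {
    var count = Math.min(b1.overlappingSegments.length, b2.overlappingSegments.length);
    if (count === 0) {
        return Number.MAX_VALUE;
    }
    var total = 0;
    for (var i = 0; i < count; i++) {
        total += segmentDistance(b1.overlappingSegments[i], b2.overlappingSegments[i]);
    }
    return total / count;
}
```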

(1B) Build simultaneous audio streams – building the canon
The next challenge, and perhaps the biggest challenge of the whole app, is to build the canon – that is, given the Beat Distance Function, create two audio streams, one beat at a time, that sound good when played simultaneously. The first stream is easy: we just play the beats in normal beat order. It’s the second stream, the canon stream, that we have to worry about. The challenge: put the beats in the canon stream in an order such that (1) the beats are in a different order than the main stream, and (2) they sound good when played with the main stream.

The first thing we can try is to make each beat in the canon stream be the most similar sounding beat to the corresponding beat in the main stream.  If we do that we end up with something that looks like this:

[Visualization: Someone Like You (autocanonized) by Adele – similar-beat connections]

It’s a rat’s nest of connections; very little structure is evident. You can listen to what it sounds like by clicking here: Experimental Rat’s Nest version of Someone Like You (autocanonized). It’s worth a listen to get a sense of where we start from. So why does this bounce all over the place? There are lots of reasons. First, there’s lots of repetition in music – if I’m in the first chorus, the most similar beat may be in the second or third chorus. Both may sound very similar, and it is practically a roll of the dice which one will win, leading to much bouncing between the two choruses. Second, since we have to find a similar beat for every beat, even beats that have no near neighbors are forced into the graph, which turns it into spaghetti. Finally, the underlying beat distance function relies on weights that are hard to generalize for all songs, leading to more noise. The bottom line is that this simple approach leads to a chaotic and mostly non-musical canon with head-jarring transitions on the canon channel. We need to do better.
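For reference, that naive first pass is just a per-beat nearest-neighbor lookup. A minimal sketch, assuming the beatDistance() function from the earlier example and a beats array from the analysis:

```javascript
// Naive canon stream: for every beat in the main stream, the canon stream
// simply gets that beat's most similar other beat.
function naiveCanonStream(beats) {
    return beats.map(function (beat, i) {
        var best = null;
        var bestDistance = Number.MAX_VALUE;
        beats.forEach(function (other, j) {
            if (i === j) {
                return;                 // skip pairing a beat with itself
            }
            var d = beatDistance(beat, other);
            if (d < bestDistance) {
                bestDistance = d;
                best = other;
            }
        });
        return best;                    // the canon beat to play against beats[i]
    });
}
```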

There are glimmers of musicality in this version though. Every once in a while, the canon channel will remain on a single sequential run of beats for a little while. When this happens, it sounds much more musical. If we can make this happen more often, we may end up with a better sounding canon. The challenge, then, is to find a way to identify long consecutive strands of beats that fit well with the main stream. One approach is to break the main stream down into a set of musically coherent phrases and align each of those phrases with a similar sounding coherent phrase. This will help us avoid the many head-jarring transitions that occur in the previous rat’s nest example. But how do we break a song down into coherent phrases? Luckily, it is already done for us. The Echo Nest analysis includes a breakdown of a song into sections – high level, musically coherent phrases in the song – exactly what we are looking for. We can use the sections to drive the mapping. Note that breaking a song down into sections is still an open research problem – there’s no perfect algorithm for it yet, but The Echo Nest algorithm is state of the art and probably as good as it gets. Luckily, for this task, we don’t need a perfect algorithm. In the above visualization you can see the sections. Here’s a blown up version – each of the horizontal colored rectangles represents one section:

[Visualization: Someone Like You (autocanonized) by Adele – song sections]

You can see that this song has 11 sections. Our goal, then, is to get a sequence of beats for the canon stream that aligns well with the beats of each section. To make things a little easier to see, let’s focus on a single section. The following visualization shows the similar-beat graph for a single section (section 3) of the song:

[Visualization: Someone Like You (autocanonized) by Adele – similar-beat graph for section 3]

You can see bundles of edges leaving section 3 bound for sections 5 and 6. We could use these bundles to find the most similar sections and simply overlap them. However, given that sections are rarely the same length, nor are they likely to be aligned to similar sounding musical events, it is unlikely that this would give a good listening experience. Still, we can use this bundling to our advantage. Remember, our goal is to find a good coherent sequence of beats for the canon stream. We can make a simplifying rule that we will select a single sequence of beats for the canon stream to align with each section. The challenge, then, is simply to pick the best sequence for each section, and the edge bundles help us do this. For each beat in the main stream section we calculate the distance to its most similar sounding beat. We aggregate these distances and find the most commonly occurring one. For example, there are 64 beats in section 3. The most commonly occurring jump distance to a sibling beat is 184 beats away; there are ten beats in the section with a similar beat at this distance. We then use this most common distance of 184 to generate the canon stream for the entire section: for each beat of this section in the main stream, we add a beat to the canon stream that is 184 beats away from the main stream beat. Thus for each main stream section we find the most similar matching stream of beats for the canon stream. This visualization shows the corresponding canon beat for each beat in the main stream:

[Visualization: Someone Like You (autocanonized) by Adele – per-section canon beat mapping]
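In code, selecting and applying the per-section offset might look roughly like the following. The section is assumed to be given as a range of beat indices, and findNearestBeat() is the nearest-neighbor lookup from the naive approach factored into a helper; this is a sketch of the idea, not the actual Autocanonizer source.

```javascript
// For one section, find the most commonly occurring jump distance (in beats)
// from a main-stream beat to its most similar beat, then build the canon
// stream for that section using that single offset.
function canonStreamForSection(beats, sectionStart, sectionEnd) {
    var offsetCounts = {};

    // Tally the offset to each beat's nearest neighbor.
    for (var i = sectionStart; i < sectionEnd; i++) {
        var nearest = findNearestBeat(beats, i);
        var offset = nearest - i;                      // jump distance in beats
        offsetCounts[offset] = (offsetCounts[offset] || 0) + 1;
    }

    // Pick the most commonly occurring offset.
    var bestOffset = 0;
    var bestCount = 0;
    Object.keys(offsetCounts).forEach(function (offset) {
        if (offsetCounts[offset] > bestCount) {
            bestCount = offsetCounts[offset];
            bestOffset = parseInt(offset, 10);
        }
    });

    // Apply that offset to every beat in the section.
    var canon = [];
    for (var j = sectionStart; j < sectionEnd; j++) {
        var k = j + bestOffset;
        if (k >= 0 && k < beats.length) {
            canon.push(beats[k]);
        }
    }
    return canon;
}

// Index of the beat most similar to beats[i] (other than itself).
function findNearestBeat(beats, i) {
    var best = -1;
    var bestDistance = Number.MAX_VALUE;
    for (var j = 0; j < beats.length; j++) {
        if (j === i) {
            continue;
        }
        var d = beatDistance(beats[i], beats[j]);
        if (d < bestDistance) {
            bestDistance = d;
            best = j;
        }
    }
    return best;
}
```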

This approach has a number of good properties. First, the segments don’t need to be perfectly aligned with each other. Note, in the above visualization, that the beats similar to section 3 span across sections 5 and 6. If there are two separate chorus segments that should overlap, it is no problem if they don’t start at exactly the same point in the chorus; the inter-beat distance will sort it all out. Second, we get good behavior even for sections that have no strong complementary section. For instance, the second section is mostly self-similar, so this approach aligns the section with a copy of itself offset by a few beats, leading to a very nice call and answer.

[Visualization: Someone Like You (autocanonized) by Adele – self-similar section offset by a few beats]

That’s the core mechanism of The Autocanonizer: for each section in the song, find the most commonly occurring distance to a sibling beat, and then generate the canon stream by assembling beats at that distance. The algorithm is not perfect. It fails badly on some songs, but for many songs it generates a really good canon effect. The gallery has 20 or so of my favorites.

(2) Play the streams back simultaneously
When I first released my hack, to actually render the two streams as audio I played each beat of the two streams simultaneously using the Web Audio API. This was the easiest thing to do, but for many songs it results in some very noticeable audio artifacts at the beat boundaries – any time there’s an interruption in the audio stream there’s likely to be a click or a pop. For this to be a viable hack that I want to show people, I really needed to get rid of those artifacts. To do this I take advantage of the fact that, for the most part, we are playing longer sequences of continuous beats. So instead of playing a single beat at a time, I queue up the remaining beats in the song as a single queued buffer. When it is time to play the next beat, I check to see if it is the same beat that would naturally play if I let the currently playing queue continue. If it is, I ‘let it ride’, so to speak: the next beat plays seamlessly without any audio artifacts. I can do this for both the main stream and the canon stream, and it virtually eliminates all the pops and clicks. However, there’s a complicating factor. A song can vary in tempo throughout, so the canon stream and the main stream can easily get out of sync. To remedy this, at every beat we calculate the accumulated timing error between the two streams. If this error exceeds a certain threshold (currently 50ms), the canon stream is resync’d starting from the current beat. Thus we can keep both streams in sync with each other while minimizing the explicit beat-by-beat queuing that causes the audio artifacts. The result is an audio stream that is nearly click free.
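A minimal sketch of the scheduling idea, assuming the song has already been decoded into a Web Audio buffer and that each beat carries its start time and duration in seconds plus its position in the original beat order (the ‘which’ field below is an assumption):

```javascript
// Rough sketch of click-free playback: schedule one AudioBufferSourceNode
// per run of naturally consecutive beats instead of one node per beat.
var context = new AudioContext();

function scheduleStream(audioBuffer, streamBeats, startTime) {
    var i = 0;
    var when = startTime;
    while (i < streamBeats.length) {
        // Grow a run of consecutive beats so one buffer can 'ride' through
        // all of them with no interruption (and therefore no clicks).
        var runStart = i;
        while (i + 1 < streamBeats.length &&
               streamBeats[i + 1].which === streamBeats[i].which + 1) {
            i++;
        }
        var first = streamBeats[runStart];
        var last = streamBeats[i];
        var offset = first.start;
        var duration = (last.start + last.duration) - first.start;

        var source = context.createBufferSource();
        source.buffer = audioBuffer;
        source.connect(context.destination);
        source.start(when, offset, duration);   // play the whole run seamlessly

        when += duration;
        i++;
    }
}
```

The real player also tracks the accumulated timing error between the two streams as it plays and requeues the canon stream whenever that error passes the 50ms threshold; that bookkeeping is omitted here.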

(3) Supply a visualization that gives the listener an idea of how the app works
I’ve found that with many of these remixing apps, giving the listener a visualization that helps them understand what is happening under the hood is a key part of the hack. The first visualization that accompanied my hack was rather lame:

[Visualization: Let It Be (autocanonized) by The Beatles – original hackathon visualization]

It showed the beats lined up in a row, colored by the timbral data. The two playback streams were represented by two ‘tape heads’: the red tape head playing the main stream and the green head showing the canon stream. You could click on beats to play different parts of the song, but it didn’t really give you an idea of what was going on under the hood. In the few days since the hackathon, I’ve spent a few hours upgrading the visualization to be something better. I did four things: reveal more about the song structure, show the song sections, show the canon graph, and animate the transitions.

Reveal more about the song
The colored bars don’t really tell you too much about the song. With a good song visualization you should be able to tell the difference between two songs that you know just by looking at it. In addition to the timbral coloring, showing the loudness at each beat should reveal some of the song structure. Here’s a plot that shows the beat-by-beat loudness for Stairway to Heaven:

[Plot: beat-by-beat loudness for Stairway to Heaven (autocanonized) by Led Zeppelin]

You can see the steady build in volume over the course of the song, but it is still a less than ideal plot. First of all, one would expect the build for a song like Stairway to Heaven to be more dramatic than this slight incline shows. This is because the loudness values are on a logarithmic (decibel) scale. We can get back some of the dynamic range by converting to a linear scale, like so:

[Plot: beat-by-beat loudness on a linear scale for Stairway to Heaven (autocanonized) by Led Zeppelin]

This is much better, but the noise still dominates the plot. We can smooth out the noise by taking a windowed average of the loudness for each beat. Unfortunately, that also softens the sharp edges, so short events like ‘the drop’ could get lost. We want to preserve the significant edges while still eliminating much of the noise. A good way to do this is to use a median filter instead of a mean filter. When we apply such a filter we get a plot that looks like this:

[Plot: median-filtered beat-by-beat loudness for Stairway to Heaven (autocanonized) by Led Zeppelin]
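The two processing steps behind these plots are simple enough to sketch. The dB-to-amplitude conversion is just the standard formula; the median filter window size below is an illustrative guess, not necessarily what the visualization uses.

```javascript
// Convert the analyzer's loudness values (decibels, a log scale) to a
// linear amplitude scale to recover some visual dynamic range.
function dbToLinear(db) {
    return Math.pow(10, db / 20);
}

// Windowed median filter over per-beat loudness: smooths noise while
// preserving sharp edges like a sudden drop.
function medianFilter(values, windowSize) {
    var half = Math.floor(windowSize / 2);
    return values.map(function (v, i) {
        var lo = Math.max(0, i - half);
        var hi = Math.min(values.length, i + half + 1);
        var window = values.slice(lo, hi).sort(function (a, b) { return a - b; });
        return window[Math.floor(window.length / 2)];
    });
}

// Example: var smoothed = medianFilter(beatLoudness.map(dbToLinear), 11);
```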

In the filtered plot the noise is gone, but we still have all the nice sharp edges. Now there’s enough info to help us distinguish between two well known songs. See if you can tell which of the following plots is ‘A Day in the Life’ by The Beatles and which one is ‘Hey Jude’:

[Loudness plot: A Day in the Life (autocanonized) by The Beatles]

Which song is it: ‘Hey Jude’ or ‘A Day in the Life’?

[Loudness plot: Hey Jude (autocanonized) by The Beatles]

Which song is it: ‘Hey Jude’ or ‘A Day in the Life’?

Show the song sections
As part of the visualization upgrades I wanted to show the song sections, to help mark where the canon phrase boundaries are. To do this I created a simple set of colored blocks along the baseline. Each one aligns with a section. The colors are assigned randomly.

Show the canon graph and animate the transitions.
To help the listener understand how the canon is structured, I show the canon transitions as arcs along the bottom of the graph. When the song is playing, the green cursor representing the canon stream animates along the graph, giving the listener a dynamic view of what is going on. The animations were fun to do. They weren’t built into Raphael; instead I got to do them manually. I’m pretty pleased with how they came out.
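Roughly, drawing one of those arcs and moving the canon cursor by hand might look like this. The coordinates, sizes, and the getCanonBeatX() callback are made-up placeholders for illustration, not the actual visualization code.

```javascript
// Assumes a container element with id "viz" and Raphael loaded on the page.
var paper = Raphael("viz", 900, 200);

// Draw an arc along the bottom of the graph from one beat's x position to
// its canon partner's x position.
function drawCanonArc(fromBeatX, toBeatX) {
    var baseline = 180;
    var mid = (fromBeatX + toBeatX) / 2;
    var lift = Math.min(60, Math.abs(toBeatX - fromBeatX) / 4);
    var pathString = "M" + fromBeatX + "," + baseline +
                     " Q" + mid + "," + (baseline - lift) +
                     " " + toBeatX + "," + baseline;
    return paper.path(pathString).attr({ stroke: "#4a4", "stroke-width": 1 });
}

// Animate the green canon cursor manually with requestAnimationFrame.
var canonCursor = paper.rect(0, 0, 2, 180).attr({ fill: "#0a0", stroke: "none" });

function animateCursor(getCanonBeatX) {
    function step() {
        canonCursor.attr({ x: getCanonBeatX() });   // follow the playing canon beat
        requestAnimationFrame(step);
    }
    requestAnimationFrame(step);
}
```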

[Visualization: Stairway to Heaven (autocanonized) by Led Zeppelin – canon arcs and section blocks]

All in all I think the visualization is pretty neat now compared to where it was after the hack. It is colorful and eye catching. It tells you quite a bit about the structure and makeup of a song (including detailed loudness, timbral and section info). It shows how the song will be represented as a canon, and when the song is playing it animates to show you exactly which parts of the song are being played against each other. You can interact with the visualization by clicking on it to listen to any particular part of the canon.

[Visualization: Stairway to Heaven (autocanonized) by Led Zeppelin]

Wrapping up: this was a fun hack and the results are pretty unique. I don’t know of any other auto-canonizing apps out there. It was pretty fun to demo the hack at the SXSW Music Hack Championship too. People really seemed to like it and even applauded spontaneously in the middle of my demo. The updates I’ve made since then – fixing the audio glitches and giving the visualization a face lift – make it ready for the world to come and visit. Now I just need to wait for them to come.


The Autocanonizer

I’ve spent the last 24 hours at the SXSW Music Hackathon championship. For my hack I’ve built something called The Autocanonizer. It takes any song and tries to make a canon out of it. A canon is a song that can be played against a copy of itself. The Autocanonizer does this by looking at the detailed audio in the song (via The Echo Nest analysis) and looking for parts of the song that might overlap well. It builds a map of all these parts, and when it plays the song it plays the main audio while overlapping it with the second audio stream. It doesn’t always work, but when it does, the results can be quite fun and sometimes quite pleasing.

To go along with the playback I created a visualization that shows the song and the two virtual tape heads that are playing the song. You can click on the visualization to hear particular bits.

[Visualization: Let It Be (autocanonized) by The Beatles]

 

There are some audio artifacts on a few songs still. I know how to fix them, but it requires some subtle math (subtraction) that I’m sure I’ll mess up right before the hackathon demo if I attempt it now, so it will have to wait for another day. Also, there’s a Firefox issue that I will fix in the coming days. Or you can go and fix all this yourself, because the code is on GitHub.


Beyond the Play Button – the future of listening

I’ve created a page with all the supporting info for my SXSW talk, Beyond the Play Button – the future of listening.

Beyond the Play Button – the future of listening

The page contains slides, links to all supporting data and links to all apps demoed.


Top SXSW Music Panels for music exploration, discovery and interaction

SXSW 2014 PanelPicker has opened up. I took a tour through the SXSW Music panel proposals to highlight the ones that are of most interest to me … typically technical panels about music discovery and interaction. Here’s the best of the bunch. You’ll notice a number of Echo Nest oriented proposals. I’m really not shilling; I genuinely think these are really interesting talks (well, maybe I’m shilling for my own talk).

I’ve previously highlighted the best of the bunch for SXSW Interactive.

A Genre Map for Discovering the World of Music
All the music ever made (approximately) is a click or two away. Your favorite music in the world is probably something you’ve never even heard of yet. But which click leads to it?

Most music “discovery” tools are only designed to discover the most familiar thing you don’t already know. Do you like the Dave Matthews Band? You might like O.A.R.! Want to know what your friends are listening to? They’re listening to Daft Punk, because they don’t know any more than you. Want to know what’s hot? It’s yet another Imagine Dragons song that actually came out in 2012. What we NEED are tools for discovery through exploration, not dictation.

This talk will provide a manic music-discovery demonstration-expedition, showcasing how discovery through exploration (The Echo Nest Discovery list & the genre mapping experiment, Every Noise at Once) in the new streaming world is not an opportunity to pay different people to dictate your taste, but rather a journey, unearthing new music JUST FOR YOU.

The Predictive Power of Music
Music taste is extremely personal and an important part of defining and communicating who we are.

Musical Identity, understanding who you are as a music fan and what that says about you, has always been a powerful indicator of other things about you. Broadcast radio’s formats (Urban, Hot A/C, Pop, and so on) are based on the premise that a certain type of music attracts a certain type of person. However, the broadcast version of Musical Identity is a blunt instrument, grouping millions of people into about 12 audience segments. Now that music has become a two-way conversation online, Musical Identity can become considerably more precise, powerful, and predictive.

In this talk, we’ll look at why music is one of the strongest predictors and how music preference can be used to make predictions about your taste in other forms of entertainment (books, movies, games, etc).

Your Friends Have Bad Taste: Fixing Social Music
Music is the most social form of entertainment consumption, but online music has failed to deliver truly social & connected music experiences. Social media updates telling you your aunt listened to Hall and Oates don’t deliver on the promise of social music. As access-based streaming music becomes more mainstream, the current failure & huge potential of social music is becoming clearer. A variety of app developers & online music services are working to create experiences that use music to connect friends & use friends to connect you with new music you’ll love. This talk will uncover how to make social music a reality.

Anyone Can Be a DJ: New Active Listening on Mobile
The mobile phone has become the de facto device for accessing music. According to a recent report, the average person uses their phone as a music player 13 times per day. With over 30 million songs available, any time, any place, listening is shifting from a passive to a personalized and interactive experience for a highly engaged audience.

New data-powered music players on sensor-packed devices are becoming smarter, and could enable listeners to feel more like creators (e.g. Instagram) by dynamically adapting music to its context (e.g. running, commuting, partying, playing). A truly personalized pocket DJ will bring music listening, discovery, and sharing to an entirely new level.

In this talk, we’ll look at how data-enhanced content and smarter mobile players will change the consumer experience into a more active, more connected, and more engaged listening experience.

Human vs. Machine: The Music Curation Formula
Recreating human recommendations in the digital sphere at scale is a problem we’re actively solving across verticals, but no one quite has the perfect formula. The vertical where this issue is especially ubiquitous is music. Where we currently stand is solving the integration of human data with machine data and algorithms to generate personalized recommendations that mirror the nuances of human curation. This formula is the holy grail.

Algorithmic, Curated & Social Music Discovery
As the Internet has made millions of tracks available for instant listening, digital music and streaming companies have focused on music recommendations and discovery. Approaches have included using algorithms to present music tailored to listeners’ tastes, using the social graph to find music, and presenting curated & editorial content. This panel will discuss the methods, successes and drawbacks of each of these approaches. We will also discuss the possibility of combining all three approaches to present listeners with a better music discovery experience, with on-the-ground stories of the lessons from building a Discover experience at Spotify.

Beyond the Play Button – The Future of Listening (This is my talk)

[Visualization: Rolling in the Deep (labelled) by Adele]

35 years after the first Sony Walkman shipped, today’s music player still has essentially the same set of controls as that original portable music player. Even though today’s music player might have a million times more music than the cassette player, the interface to all of that music has changed very little.

In this talk we’ll explore new ways that a music listener can interact with their music. First we will explore the near future where your music player knows so much about you, your music taste and your current context that it plays the right music for you all the time. No UI is needed.

Next, we’ll explore a future where music listening is no longer a passive experience. Instead of just pressing the play button and passively listening you will be able to jump in and interact with the music. Make your favorite song last forever, add your favorite drummer to that Adele track or unleash your inner Skrillex and take total control of your favorite track.

5 Years of Music Hack Day
Started in 2009 by Dave Haynes and James Darling, Music Hack Day has become the gold standard of music technology events. Having grown to a worldwide, monthly event that has seen over 3500 music hacks created in over 20 cities, the event is still going great guns. But what impact has this event had on the music industry and its connection with technology? This talk looks back at the first 5 years of Music Hack Day, from its origins to becoming something more important and difficult to control than its ‘adhocracy’ beginnings. Have these events really impacted the industry in a positive way, or have the last 5 years simply seen a maturing attitude towards technology’s place in the music industry? We’ll look at the successes, the hacks that blew people’s minds and what influence so many events with such a passionate audience have had on changing the relationship between music and tech.

The SXSW organizers pay attention when they see a panel that gets lots of votes, so head on over and make your opinion known.


Top SXSWi panels for music discovery and interaction

SXSW 2014 PanelPicker has opened up. I took a tour through the SXSW Interactive talk proposals to highlight the ones that are of most interest to me … typically technical panels about music discovery and interaction. Here’s the best of the bunch. Tomorrow, I’ll take a tour through the SXSW Music proposals.

Algorithmic Music Discovery at Spotify
Spotify crunches hundreds of billions of streams to analyze users’ music taste and provide music recommendations for its users. We will discuss how the algorithms work, how they fit in within the products, what the problems are and where we think music discovery is going. The talk will be quite technical with a focus on the concepts and methods, mainly how we use large scale machine learning, but we will also cover some aspects of music discovery from a user perspective that greatly influenced the design decisions.

Delivering Music Recommendations to Millions
At its heart, presenting personalized data and experiences for users is simple. But transferring, delivering and serving this data at high scale can become quite challenging.
In this session, we will speak about the scalability lessons we learned building Spotify’s Discover system. This system generates terabytes of music recommendations that need to be delivered to tens of millions of users every day. We will focus on the problems encountered when big data needs to be replicated across the globe to power interactive media applications, and share strategies for coping with data at this scale.

Are Machines the DJ’s of Digital Music?
When it comes to music curation, has our technology exceeded our humanity? Fancy algorithms have done wonders for online dating. Can they match you with your new favorite music? Hear music editors from Rhapsody, Google Music, Sony Music and Echonest debate their changing role in curation and music discovery for streaming music services. Whether tuning into the perfect summer dance playlist or easily browsing recommended artists, finding and listening to music is the result of very intentional decisions made by editorial teams and algorithms. Are we sophisticated enough to no longer need the human touch on our music services? Or is that all that separates us from the machines?

Your Friends Have Bad Taste: Fixing Social Music
Music is the most social form of entertainment consumption, but online music has failed to deliver truly social & connected music experiences. Social media updates telling you your aunt listened to Hall and Oates don’t deliver on the promise of social music. As access-based streaming music becomes more mainstream, the current failure & huge potential of social music is becoming clearer. A variety of app developers & online music services are working to create experiences that use music to connect friends & use friends to connect you with new music you’ll love. This talk will uncover how to make social music a reality, including:

  • Musical Identity (MI) – who we are as music fans and how understanding MI is unlocking social music apps
  • If my friend uses Spotify & I use Rdio, can we still be friends? ID resolution & social sharing challenges
  • Discovery issue: finding like-minded fans & relevant expert music curators
  • A look at who’s actually building the future of social music

‘Man vs. Machine’ Is Dead, Long Live Man+Machine
A human on a bicycle is the most efficient land-traveller on planet Earth. Likewise, the most efficient, advanced, accurate, helpful, and enjoyable music recommendation systems combine man and machine. This dual-pronged approach puts powerful, data-driven tools in the hands of thinking, feeling experts and end users. In other words, the debate over whether human experts or machines are better at recommending music is over. The answer is “both” – a hybrid between creative technology and creative curators. This panel will provide specific examples of this approach that are already taking place, while looking to the future to see where it’s all headed.

Are Recommendation Engines Killing Discovery?
Are recommendation engines – like Yelp, Google, and Spotify – ruining the way we experience life? “Absolutely,” says Ned Lampert. The average person looks at their phone 150 times a day, and the majority of content they’re looking at is filtered through a network of friends, likes, and assumptions. Life is becoming prescriptive, opinions are increasingly polarized, and curiosity is being stifled. Recommendation engines leave no room for the unexpected. Craig Key says, “absolutely not.” The Web now has infinitely more data points than we did pre-Google. Not only is there more content, but there’s more data about you and me: our social graph, Netflix history (if you’re brave), our Tweets, and yes, our Spotify activity. Data is the new currency in digital experiences. While content remains king, it will be companies that can use data to sort and display that content in a meaningful way that will win. This session will explore these dueling perspectives.

Genre-Bending: Rise of Digital Eclecticism
The explosion in popularity of streaming music services has started to change the way we listen. But even beyond those always-on devices with unlimited access to millions of songs that we listen to on our morning commutes, while wending our way through paperwork at our desks or on our evening jogs, there is an even more fundamental change going on. Unlimited access has unhinged musical taste to the point where eclecticism and tastemaking trump identifying with a scene. Listeners are becoming more adventurous, experiencing many more types of music than ever before. And artists are right there with them, blending styles and genres in ways that would be unimaginable even a decade ago. In his role as VP Product-Content, Jon Maples has a front row seat to how music-listening behavior has evolved. He’ll share findings from a recent ethnographic study that reveals intimate details on how people live their musical lives.

Put It In Your Mouth: Startups as Tastemakers
Your life has been changed, at least once, by a startup in the last year. Don’t argue; it’s true. Think about it – how do you listen to music? How do you choose what movie to watch? How do you shop, track your fitness or share memories? Whoever you are, whatever your preferences, emerging technology has crept into your life and changed the way you do things on a daily basis. This group of innovators and tastemakers will take a highly entertaining look at how the apps, devices and online services in our lives are enhancing and molding our culture in fundamental ways. Be warned – a dance party might break out and your movie queue might expand exponentially.

And here’s a bit of self promotion … my proposed panel is all about new interfaces for music.

Beyond the Play Button – The Future of Listening
35 years after the first Sony Walkman shipped, today’s music player still has essentially the same set of controls as that original portable music player. Even though today’s music player might have a million times more music than the cassette player, the interface to all of that music has changed very little.  In this talk we’ll explore new ways that a music listener can interact with their music. First we will explore the near future where your music player knows so much about you, your music taste and your current context that it plays the right music for you all the time. No UI is needed.  Next, we’ll explore a future where music listening is no longer a passive experience. Instead of just pressing the play button and passively listening you will be able to jump in and interact with the music. Make your favorite song last forever, add your favorite drummer to that Adele track or unleash your inner Skrillex and take total control of your favorite track.

The SXSW organizers pay attention when they see a panel that gets lots of votes, so head on over and make your opinion known.


Beyond the Play Button – My SXSW Proposal

It is SXSW Panel Picker season.   I’ve submitted a talk to both SXSW Interactive and SXSW Music.  The talk is called ‘Beyond the Play Button – the Future of Listening’ – the goal of the talk is to explore new interfaces for music listening, discovery and interaction.  I’ll show a bunch of my hacks and some nifty stuff I’ve been building in the lab. Here’s the illustrated abstract:

35 years after the first Sony Walkman shipped, today’s music player still has essentially the same set of controls as that original portable music player. Even though today’s music player might have a million times more music than the cassette player, the interface to all of that music has changed very little.

 

In this talk we’ll explore new ways that a music listener can interact with their music. First we will explore the near future where your music player knows so much about you, your music taste and your current context that it plays the right music for you all the time. No UI is needed.

Next, we’ll explore a future where music listening is no longer a passive experience. Instead of just pressing the play button and passively listening you will be able to jump in and interact with the music. Make your favorite song last forever, add your favorite drummer to that Adele track or unleash your inner Skrillex and take total control of your favorite track.

If this talk looks interesting to you (and if you are a regular reader of my blog, it probably is), and you are going to SXSW, consider voting for the talk via the SXSW Panel Picker.


Slides for my Data Mining Music talk

I recently gave a talk on Data Mining Music at SXSW.  It was a standing room only session, with an enthusiastic audience that asked great questions.  It was a really fun time for me.   I’ve posted the slides to Slideshare, but be warned that there are no speaker notes so it may not always be clear what any particular slide is about.  There was lots of music in the talk, but unfortunately, it is not in the Slideshare PDF. The links below should flesh out most of the details and have some audio examples.

Data Mining Music

Related Links:

Thanks to everyone who attended.

