Archive for category events

Upcoming Music Hack Days – Chicago, Bologna and NYC


Photo by Thomas Bonte

Fall is traditional Music Hack Day season, and 2013 is shaping up to be the strongest yet. Three Music Hack Days have just been announced:

  • Chicago – September 21st and 22nd – this will be the first ever Music Hack Day in Chicago.
  • Bologna – October 5th and 6th – in collaboration with roBOt Festival 2013. The first Music Hack Day in Italy.
  • New York – October 18th and 19th – being held in Spotify’s nifty new offices.

There will no doubt be more hack days before the end of the year, including the traditional Boston and London events. You can check out the full schedule and sign up to be notified whenever a new Music Hack Day is announced at MusicHackDay.org.

Music Hack Day is an international 24-hour event where programmers, designers and artists come together to conceptualize, build and demo the future of music. Software, hardware, mobile, web, instruments, art – anything goes as long as it’s music related.


Top SXSW Music Panels for music exploration, discovery and interaction

SXSW 2014 PanelPicker has opened up. I took a tour through the SXSW Music panel proposals to highlight the ones that are of most interest to me … typically technical panels about music discovery and interaction. Here’s the best of the bunch. You’ll notice a number of Echo Nest oriented proposals. I’m not shilling; I genuinely think these are interesting talks (well, maybe I’m shilling for my talk).

I’ve previously highlighted the best of the bunch for SXSW Interactive.

A Genre Map for Discovering the World of Music
All the music ever made (approximately) is a click or two away. Your favorite music in the world is probably something you’ve never even heard of yet. But which click leads to it?

Most music “discovery” tools are only designed to discover the most familiar thing you don’t already know. Do you like the Dave Matthews Band? You might like O.A.R.! Want to know what your friends are listening to? They’re listening to Daft Punk, because they don’t know any more than you. Want to know what’s hot? It’s yet another Imagine Dragons song that actually came out in 2012. What we NEED are tools for discovery through exploration, not dictation.

This talk will provide a manic music-discovery demonstration-expedition, showcasing how discovery through exploration (The Echo Nest Discovery list & the genre mapping experiment, Every Noise at Once) in the new streaming world is not an opportunity to pay different people to dictate your taste, but rather a journey, unearthing new music JUST FOR YOU.

The Predictive Power of Music
Music taste is extremely personal and an important part of defining and communicating who we are.

Musical Identity, understanding who you are as a music fan and what that says about you, has always been a powerful indicator of other things about you. Broadcast radio’s formats (Urban, Hot A/C, Pop, and so on) are based on the premise that a certain type of music attracts a certain type of person. However, the broadcast version of Musical Identity is a blunt instrument, grouping millions of people into about 12 audience segments. Now that music has become a two-way conversation online, Musical Identity can become considerably more precise, powerful, and predictive.

In this talk, we’ll look at why music is one of the strongest predictors and how music preference can be used to make predictions about your taste in other forms of entertainment (books, movies, games, etc).

Your Friends Have Bad Taste: Fixing Social Music
Music is the most social form of entertainment consumption, but online music has failed to deliver truly social & connected music experiences. Social media updates telling you your aunt listened to Hall and Oates don’t deliver on the promise of social music. As access-based, streaming music becomes more mainstream, the current failure & huge potential of social music is becoming clearer. A variety of app developers & online music services are working to create experiences that use music to connect friends & use friends to connect you with new music you’ll love. This talk will uncover how to make social music a reality.

Anyone Can Be a DJ: New Active Listening on Mobile
The mobile phone has become the de facto device for accessing music. According to a recent report, the average person uses their phone as a music player 13 times per day. With over 30 million songs available, any time, any place, listening is shifting from a passive to a personalized and interactive experience for a highly engaged audience.

New data-powered music players on sensor-packed devices are becoming smarter, and could enable listeners to feel more like creators (e.g. Instagram) by dynamically adapting music to its context (e.g. running, commuting, partying, playing). A truly personalized pocket DJ will bring music listening, discovery, and sharing to an entirely new level.

In this talk, we’ll look at how data-enhanced content and smarter mobile players will change the consumer experience into a more active, more connected, and more engaged listening experience.

Human vs. Machine: The Music Curation Formula
Recreating human recommendations in the digital sphere at scale is a problem we’re actively solving across verticals, but no one quite has the perfect formula. The vertical where this issue is especially ubiquitous is music. Where we currently stand is solving the integration of human data with machine data and algorithms to generate personalized recommendations that mirror the nuances of human curation. This formula is the holy grail.

Algorithmic, Curated & Social Music Discovery
As the Internet has made millions of tracks available for instant listening, digital music and streaming companies have focused on music recommendations and discovery. Approaches have included using algorithms to present music tailored to listeners’ tastes, using the social graph to find music, and presenting curated & editorial content. This panel will discuss the methods, successes and drawbacks of each of these approaches. We will also discuss the possibility of combining all three approaches to present listeners with a better music discovery experience, with on-the-ground stories of the lessons from building a Discover experience at Spotify.

Beyond the Play Button – The Future of Listening (This is my talk)

Rolling in the Deep (labelled) by Adele


35 years after the first Sony Walkman shipped, today’s music player still has essentially the same set of controls as that original portable music player. Even though today’s music player might have a million times more music than the cassette player, the interface to all of that music has changed very little.

In this talk we’ll explore new ways that a music listener can interact with their music. First we will explore the near future where your music player knows so much about you, your music taste and your current context that it plays the right music for you all the time. No UI is needed.

Next, we’ll explore a future where music listening is no longer a passive experience. Instead of just pressing the play button and passively listening you will be able to jump in and interact with the music. Make your favorite song last forever, add your favorite drummer to that Adele track or unleash your inner Skrillex and take total control of your favorite track.

5 Years of Music Hack Day
Started in 2009 by Dave Haynes and James Darling, Music Hack Day has become the gold standard of music technology events. Having grown to a worldwide, monthly event that has seen over 3,500 music hacks created in over 20 cities, the event is still going great guns. But what impact has it had on the music industry and its connection with technology? This talk looks back at the first 5 years of Music Hack Day, from its origins to becoming something more important and difficult to control than its ‘adhocracy’ beginnings. Have these events really impacted the industry in a positive way, or have the last 5 years simply seen a maturing attitude towards technology’s place in the music industry? We’ll look at the successes, the hacks that blew people’s minds and the influence so many events with such a passionate audience have had on changing the relationship between music and tech.

The SXSW organizers pay attention when they see a panel that gets lots of votes, so head on over and make your opinion known.


Top SXSWi panels for music discovery and interaction

SXSW 2014 PanelPicker has opened up. I took a tour through the SXSW Interactive talk proposals to highlight the ones that are of most interest to me … typically technical panels about music discovery and interaction. Here’s the best of the bunch. Tomorrow, I’ll take a tour through the SXSW Music proposals.

Algorithmic Music Discovery at Spotify
Spotify crunches hundreds of billions of streams to analyze users’ music taste and provide music recommendations for its users. We will discuss how the algorithms work, how they fit within the products, what the problems are and where we think music discovery is going. The talk will be quite technical, with a focus on the concepts and methods, mainly how we use large-scale machine learning, but we will also cover some aspects of music discovery from a user perspective that greatly influenced the design decisions.

Delivering Music Recommendations to Millions
At its heart, presenting personalized data and experiences for users is simple. But transferring, delivering and serving this data at high scale can become quite challenging.
In this session, we will speak about the scalability lessons we learned building Spotify’s Discover system. This system generates terabytes of music recommendations that need to be delivered to tens of millions of users every day. We will focus on the problems encountered when big data needs to be replicated across the globe to power interactive media applications, and share strategies for coping with data at this scale.

Are Machines the DJs of Digital Music?
When it comes to music curation, has our technology exceeded our humanity? Fancy algorithms have done wonders for online dating. Can they match you with your new favorite music? Hear music editors from Rhapsody, Google Music, Sony Music and The Echo Nest debate their changing role in curation and music discovery for streaming music services. Whether tuning into the perfect summer dance playlist or easily browsing recommended artists, finding and listening to music is the result of very intentional decisions made by editorial teams and algorithms. Are we sophisticated enough to no longer need the human touch on our music services? Or is that all that separates us from the machines?

Your Friends Have Bad Taste: Fixing Social Music
Music is the most social form of entertainment consumption, but online music has failed to deliver truly social & connected music experiences. Social media updates telling you your aunt listened to Hall and Oates don’t deliver on the promise of social music. As access-based, streaming music becomes more mainstream, the current failure & huge potential of social music is becoming clearer. A variety of app developers & online music services are working to create experiences that use music to connect friends & use friends to connect you with new music you’ll love. This talk will uncover how to make social music a reality, including:

  • Musical Identity (MI) – who we are as music fans and how understanding MI is unlocking social music apps
  • If my friend uses Spotify & I use Rdio, can we still be friends? ID resolution & social sharing challenges
  • Discovery issue: finding like-minded fans & relevant expert music curators
  • A look at who’s actually building the future of social music

‘Man vs. Machine’ Is Dead, Long Live Man+Machine
A human on a bicycle is the most efficient land-traveller on planet Earth. Likewise, the most advanced, accurate, helpful, and enjoyable music recommendation systems combine man and machine. This dual-pronged approach puts powerful, data-driven tools in the hands of thinking, feeling experts and end users. In other words, the debate over whether human experts or machines are better at recommending music is over. The answer is “both” — a hybrid between creative technology and creative curators. This panel will provide specific examples of this approach that are already taking place, while looking to the future to see where it’s all headed.

Are Recommendation Engines Killing Discovery?
Are recommendation engines – like Yelp, Google, and Spotify – ruining the way we experience life? “Absolutely,” says Ned Lampert. The average person looks at their phone 150 times a day, and the majority of content they’re looking at is filtered through a network of friends, likes, and assumptions. Life is becoming prescriptive, opinions are increasingly polarized, and curiosity is being stifled. Recommendation engines leave no room for the unexpected. Craig Key says, “absolutely not.” The Web now offers infinitely more data points than we had pre-Google. Not only is there more content, but there’s more data about you and me: our social graph, Netflix history (if you’re brave), our Tweets, and yes, our Spotify activity. Data is the new currency in digital experiences. While content remains king, it will be companies that can use data to sort and display that content in a meaningful way that will win. This session will explore these dueling perspectives.

Genre-Bending: Rise of Digital Eclecticism
The explosion in popularity of streaming music services has started to change the way we listen. But even beyond those always-on devices with unlimited access to millions of songs that we listen to on our morning commutes, while wending our way through paperwork at our desks or on our evening jogs, there is an even more fundamental change going on. Unlimited access has unhinged musical taste to the point where eclecticism and tastemaking trump identifying with a scene. Listeners are becoming more adventurous, experiencing many more types of music than ever before. And artists are right there with them, blending styles and genres in ways that would be unimaginable even a decade ago. In his role as VP Product-Content, Jon Maples has a front row seat to how music-listening behavior has evolved. He’ll share findings from a recent ethnographic study that reveals intimate details on how people live their musical lives.

Put It In Your Mouth: Startups as Tastemakers
Your life has been changed, at least once, by a startup in the last year. Don’t argue; it’s true. Think about it – how do you listen to music? How do you choose what movie to watch? How do you shop, track your fitness or share memories? Whoever you are, whatever your preferences, emerging technology has crept into your life and changed the way you do things on a daily basis. This group of innovators and tastemakers will take a highly entertaining look at how the apps, devices and online services in our lives are enhancing and molding our culture in fundamental ways. Be warned – a dance party might break out and your movie queue might expand exponentially.

And here’s a bit of self promotion … my proposed panel is all about new interfaces for music.

Beyond the Play Button – The Future of Listening
35 years after the first Sony Walkman shipped, today’s music player still has essentially the same set of controls as that original portable music player. Even though today’s music player might have a million times more music than the cassette player, the interface to all of that music has changed very little.  In this talk we’ll explore new ways that a music listener can interact with their music. First we will explore the near future where your music player knows so much about you, your music taste and your current context that it plays the right music for you all the time. No UI is needed.  Next, we’ll explore a future where music listening is no longer a passive experience. Instead of just pressing the play button and passively listening you will be able to jump in and interact with the music. Make your favorite song last forever, add your favorite drummer to that Adele track or unleash your inner Skrillex and take total control of your favorite track.

The SXSW organizers pay attention when they see a panel that gets lots of votes, so head on over and make your opinion known.


Beyond the Play Button – My SXSW Proposal

It is SXSW Panel Picker season.   I’ve submitted a talk to both SXSW Interactive and SXSW Music.  The talk is called ‘Beyond the Play Button – the Future of Listening’ – the goal of the talk is to explore new interfaces for music listening, discovery and interaction.  I’ll show a bunch of my hacks and some nifty stuff I’ve been building in the lab. Here’s the illustrated abstract:

35 years after the first Sony Walkman shipped, today’s music player still has essentially the same set of controls as that original portable music player. Even though today’s music player might have a million times more music than the cassette player, the interface to all of that music has changed very little.

 

In this talk we’ll explore new ways that a music listener can interact with their music. First we will explore the near future where your music player knows so much about you, your music taste and your current context that it plays the right music for you all the time. No UI is needed.

Next, we’ll explore a future where music listening is no longer a passive experience. Instead of just pressing the play button and passively listening you will be able to jump in and interact with the music. Make your favorite song last forever, add your favorite drummer to that Adele track or unleash your inner Skrillex and take total control of your favorite track.

If this talk looks interesting to you (and if you are a regular reader of my blog, it probably is), and you are going to SXSW, consider voting for the talk via the SXSW Panel Picker.


Rock Steady – My Music Ed Hack

This weekend I’m at the Music Education Hack in New York City, where educators and technologists are working together to transform music education in the city’s schools. My hack, Rock Steady, is a drummer-training app for the iPhone. You use the app to measure how well you can keep a steady beat. Here’s how it works:


First you add songs from your iTunes collection. The app then uses The Echo Nest to analyze each song and map out all of its beats. Once the song is ready, you enter Rock Steady training mode: the app shows you the current tempo of the song, and your goal is to match that tempo by using your phone as a drumstick and tapping out the beat. You are scored based upon how well you match the tempo. There are three modes:

  • Matching mode – in this easy-peasy mode you listen to the song and match its tempo.
  • Silent mode – a bit harder: you listen to the song for a few seconds and then try to maintain the tempo on your own.
  • Bonzo mode – here the music is playing, but instead of you matching the music, the music matches you. If you speed up, the music speeds up; if you slow down, the music slows down. This is the trickiest mode – you have to keep a steady beat and not be fooled by the band that is following you.
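The tempo-matching score at the heart of the app can be sketched roughly like this (a minimal illustration only — the function names and the tolerance value are my own assumptions, not from the app, which uses Echo Nest beat analysis and Core Motion tap detection):

```python
# Sketch of scoring how well a drummer's taps match a target tempo.
# tap_tempo and steadiness_score are hypothetical names, not the app's API.

def tap_tempo(tap_times):
    """Estimate tempo in BPM from a list of tap timestamps (in seconds)."""
    if len(tap_times) < 2:
        return 0.0
    intervals = [b - a for a, b in zip(tap_times, tap_times[1:])]
    avg_interval = sum(intervals) / len(intervals)
    return 60.0 / avg_interval

def steadiness_score(tap_times, target_bpm, tolerance_bpm=10.0):
    """Score 0-100 based on how close the tapped tempo is to the target."""
    error = abs(tap_tempo(tap_times) - target_bpm)
    return max(0.0, 100.0 * (1.0 - error / tolerance_bpm))

# Tapping exactly on a 120 BPM beat (taps 0.5 s apart) scores 100.
taps = [0.0, 0.5, 1.0, 1.5, 2.0]
print(steadiness_score(taps, target_bpm=120.0))  # → 100.0
```

In bonzo mode the same comparison would run the other way: the tapped tempo estimate drives the playback rate rather than being scored against it.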

This is my first iOS hack. I got to use lots of new stuff, such as Core Motion to detect the taps. I stole lots of code from the iOS version of the Infinite Jukebox (all the track upload and analysis stuff). It was a fun hack to build. If anyone thinks it is interesting, I may try to finish it and put it in the App Store.

 

Here’s a video:

[youtube http://www.youtube.com/watch?v=UsJ7RBkRAag]

 


Two music hackathons in NYC next weekend ….

Next weekend (starting Friday, June 28th) there are two music-related hackathons in NYC. First up, there’s HAMR.

Hacking Audio and Music Research (HAMR)

Organized by Colin Raffel, HAMR (Hacking Audio and Music Research) is a hackathon focused on music research, with a goal of testing out new ideas rather than making a finished product. The focus of HAMR is on the development of new techniques for analyzing, processing, and synthesizing audio and music signals. HAMR will be modeled after a traditional hack day in that it will involve a weekend of fast-paced work with an emphasis on trying something new rather than finishing something polished. However, this event will deviate from the typical hack day in its focus on research (rather than commercial) applications. In addition to HAMRing out work, the event will include presentations, discussions, and informal workshops. Registration is free and researchers at any stage of their career are encouraged to participate. Read more about HAMR.

The other hacking event is Music Education Hack

Music Education Hack

The goal of the Music Education Hack is to explore how technology can help transform music education in NYC schools. Hackers will have 24 hours to ideate, collaborate and innovate before presenting their work to a panel of esteemed judges for a grand prize of $5,000. The hacker teams will have access to New York City teachers as part of the creation process as they focus on building products that incorporate music and technology into the education space. For more info visit the Music Education Hack registration page.

 


The Tufts Hackathon

Last weekend, Barbara Duckworth and Jennie Lamere teamed up at the Tufts Hackathon to build a music hack. Here’s Barbara’s report from the hackathon:


Jen Lamere and Barbara Duckworth presenting:

Cinemusic – created at Tufts Hackathon

For our second hack day, Jen Lamere and I were wildly successful. Going into the Tufts hackathon, we knew that we wanted to create a hack involving music, but we didn’t want the hassle of having to make hardware to go along with it, like in our last hack, HighFive Hero.

As we were walking to the building in which the hackathon was held, we decided on making a program that would suggest a movie based on its soundtrack. The user would tell us their favorite artists, and we would find a movie soundtrack that contained similar music, the idea being that if you like the soundtrack, the movie would also be to your taste. So, let’s say you have an unnatural love for Miley Cyrus. Type that in, and our music-to-movie program would tell you to watch Another Cinderella Story, with Selena Gomez on the soundtrack. With Selena also being a Disney Channel star and of similar singing caliber, the suggestion makes sense.


Barbie and Jen hacking away (photo by Ming Chow)

We used The Echo Nest API to search for similar artists, and with the help of Paul Lamere, utilized Spotify’s fantastic tagging system to compile a huge data file of artists and soundtracks, which we then sorted through. We also added a cool last-minute feature using the Spotify API, which would start playing the soundtrack right as the movie suggestion was given. Jen and I hope to iron out any bugs that are currently in our program, and turn it into a web app.
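The matching idea can be sketched like this (a toy illustration under my own assumptions — the data and function name are made up, and in the real hack the similar-artist lists came from the Echo Nest API rather than a hard-coded dictionary):

```python
# Toy sketch of CineMusic-style matching: pick the movie whose soundtrack
# has the most overlap with artists similar to the user's favorite.
# All names and data here are hypothetical.

# Pre-compiled mapping of movies to the artists on their soundtracks.
soundtracks = {
    "Another Cinderella Story": {"Selena Gomez", "Drew Seeley"},
    "Guardians of the Galaxy": {"Blue Swede", "Redbone"},
}

# Similar artists for each favorite (would come from a similarity API).
similar_artists = {
    "Miley Cyrus": {"Selena Gomez", "Demi Lovato"},
}

def suggest_movie(favorite_artist):
    """Return the movie whose soundtrack best overlaps the similar artists."""
    similar = similar_artists.get(favorite_artist, set())
    movie, artists = max(soundtracks.items(),
                         key=lambda kv: len(kv[1] & similar))
    # Only suggest a movie if there is at least one overlapping artist.
    return movie if artists & similar else None

print(suggest_movie("Miley Cyrus"))  # → Another Cinderella Story
```

With a real similarity API and a large soundtrack file, the same overlap count (possibly weighted by similarity score) would rank every movie in the data set.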


Our (if I do say so myself) pretty awesome hack, combined with our amateur status, won us the rookie award at Tufts Hackathon! Jen and I will both be proudly wearing our new “GitHub swag” and we will hopefully find a way to put the AWS credits to good use. Thank you to everyone at Tufts, for organizing such a fantastic event!


Barbara and Jennie reviewing their swag options
