ISMIR Session – the web

The first session at ISMIR today is on the Web, with four really interesting papers:

Songle – an active music listening experience

Masataka Goto presented Songle at ISMIR this morning. Songle is a web site for active music listening and content-based music browsing. Songle takes many of the MIR techniques that researchers have been working on for years and makes them available to non-MIR experts to help them understand music better. You can also use Songle to modify the music: you can interactively change the beat and melody, and copy and paste sections. Your edits can be shared with others. Masataka hopes that Songle can serve as a showcase of MIR and music-understanding technologies, and as a platform for other researchers as well. There's a lot of really powerful music technology behind Songle. I look forward to trying it out. Paper.

Improving Perceptual Tempo Estimation with Crowd-Sourced Annotations

Mark Levy from Last.fm described Last.fm's experiment to crowd-source the gathering of tempo information (fast/slow labels and BPM estimates) that can be used to help eliminate the ambiguity in machine-estimated tempo (typically known as the octave error). They ran their test over 4K songs from a number of genres. So far they've had 27K listeners apply 200K labels and BPM estimates (woah!). Last.fm is releasing this dataset. Very interesting work. Paper
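The basic idea behind using the crowd labels can be sketched like this: when a tracker reports a BPM, the true tempo is often half or double that value, and a perceived-speed label lets you pick the most consistent octave. This is only an illustrative sketch, not Last.fm's method; the `slow_max`/`fast_min` thresholds are hypothetical.

```python
def correct_octave(estimated_bpm, perceived_speed, slow_max=90.0, fast_min=130.0):
    """Choose among BPM/2, BPM and 2*BPM the candidate most consistent
    with a crowd-sourced 'slow'/'fast' label.

    Toy sketch of octave-error correction; thresholds are made up,
    not taken from the paper.
    """
    candidates = [estimated_bpm / 2, estimated_bpm, estimated_bpm * 2]
    if perceived_speed == "slow":
        # keep only candidates slow enough to match the label
        plausible = [c for c in candidates if c <= slow_max]
    elif perceived_speed == "fast":
        # keep only candidates fast enough to match the label
        plausible = [c for c in candidates if c >= fast_min]
    else:
        plausible = []
    # fall back to the machine estimate when no candidate fits
    return plausible[0] if plausible else estimated_bpm
```

So a track estimated at 160 BPM but labeled "slow" would be corrected down to 80 BPM, resolving the octave ambiguity the machine estimate alone can't.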

Investigating the similarity space of music artists on the micro-blogosphere

Markus Schedl analyzed 6 million tweets, retrieved by searching for artist names, and conducted a number of experiments to see whether artist similarity could be determined from these tweets. (They used the Comirva framework to conduct the experiments.) Findings: document-based techniques work best (cosine similarity, while not always yielding the best result, yielded the most stable results). Unsurprisingly, adding the term 'music' to the Twitter search helps a lot (reducing the Cake, Spoon and KISS problems). A surprising result is that using tweets to derive similarity works better than using larger documents retrieved from web search; Markus suggests this may be due to the higher information content of the much shorter tweets. Datasets are available. Paper
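The document-based approach mentioned above amounts to building a term-frequency profile per artist from all tweets mentioning that artist, then comparing profiles with cosine similarity. A minimal sketch of that comparison (my own illustration, not Comirva code):

```python
import math
from collections import Counter

def cosine_similarity(doc_a, doc_b):
    """Cosine similarity between two term-frequency profiles,
    e.g. each built by concatenating all tweets mentioning an artist."""
    tf_a = Counter(doc_a.lower().split())
    tf_b = Counter(doc_b.lower().split())
    # dot product over the shared vocabulary
    dot = sum(tf_a[t] * tf_b[t] for t in tf_a)
    norm_a = math.sqrt(sum(v * v for v in tf_a.values()))
    norm_b = math.sqrt(sum(v * v for v in tf_b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0
```

Identical profiles score 1.0 and disjoint ones 0.0; in practice you'd weight terms with TF-IDF rather than raw counts, but the stability Markus reports comes from this same vector-space comparison.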

Music Influence Network Analysis and Rank of Sample-based Music

Nick Bryan from Stanford is trying to understand how songs, artists and genres interact in sample-based music (remixes, etc.). Using data from Whosampled.com (42K user-generated sample annotations), they created a directed graph and did some network analysis on it (centrality/influence). They hypothesized a power-law distribution of connectivity (a typical small-world, scale-free distribution with a rich-get-richer effect), and confirmed this hypothesis. They used Katz influence to help understand sample chains. From the song-sample graph, artist sample graphs (who sampled whom) and genre sample graphs (which genres sample from which) were derived. With all these graphs, Nick was then able to identify the most influential songs and artists (James Brown is the king of sampling); surprisingly, the Amen break is only the second most influential sample. Interesting and fun work. Paper
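Katz influence rewards a node not just for being sampled directly but for being sampled by tracks that are themselves heavily sampled, with each extra hop discounted by a factor alpha. A toy sketch of the iterative computation on a directed sample graph (my own simplified illustration, not the paper's code; I'm assuming edges run from sampler to sampled, so influence accrues to the sampled track):

```python
def katz_influence(edges, nodes, alpha=0.1, beta=1.0, iters=100):
    """Iterative Katz-style influence score on a directed graph.

    edges: list of (sampler, sampled) pairs; a node's score grows with
    the scores of the tracks that sample it, discounted by alpha.
    Toy sketch with hypothetical parameter values.
    """
    score = {n: beta for n in nodes}
    for _ in range(iters):
        new = {}
        for n in nodes:
            # base score plus discounted scores of everyone sampling n
            new[n] = beta + alpha * sum(score[u] for (u, v) in edges if v == n)
        score = new
    return score
```

On a tiny graph where two tracks both sample a third, the sampled track ends up with the highest score, which is exactly the "James Brown effect" the analysis surfaces at scale.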
