
ISMIR 2009 – The Future of MIR

This year ISMIR concludes with the 1st Workshop on the Future of MIR.  The workshop is organized by students who are indeed the future of MIR.


09:00-10:00 Special Session: 1st Workshop on the Future of MIR

The PDF files of the papers in this special session are available at the f(MIR) official website.

Welcome and Introduction to the f(MIR) workshop – Thierry Bertin-Mahieux

MIR, where we are, where we are going

Session Chair: Amélie Anglade Program Chair of f(MIR)

Meaningful Music Retrieval

Frans Wiering – [pdf]

Wiering-fmir.pdf (page 2 of 3)


  • Some unfortunate tendencies: an anatomical view of music – a dead body on which we perform autopsies, with time the loser; the traditional production-oriented view
  • Measure of similarity: relevance, surprise
  • Few interesting applications for end-users
  • bad fit to present-day musicological themes
  • We are in the world of ‘pure applied research’ – no true interdisciplinarity between music domain knowledge and computer science.
  • Music is meaningful (and the underlying personal motivation of most MIR researchers).
  • Meaning in musicology – traditionally a taboo subject
  • Subjectivity: an individual’s disposition to engage in social and cultural interactions
  • Meaning generation process – we have a long-term memory for music
  • Can musical meaning provide the ‘big story line’ for MIR?

The Discipline Formerly Known As MIR

Perfecto Herrera, Joan Serrà, Cyril Laurier, Enric Guaus, Emilia Gómez and Xavier Serra

Intro: Our exploration is not a science-fiction essay. We do not try to imagine how music will be conceptualized, experienced and mediated by our yet-to-come research, technological achievements and music gizmos. Instead, we reflect on how the discipline should evolve to become consolidated as such, so that it may have an effective future instead of becoming, after a promising start, just a “would-be” discipline. Our vision addresses different aspects: the discipline’s object of study, the employed methodologies, and social and cultural impacts (which are out of this long abstract because of space restrictions), and we finish with some (maybe) disturbing issues that could be taken as partial and biased guidelines for future research.

Herrera-fmir.pdf (page 2 of 3)

Notes: One motivation for advancing MIR – more banquets!

  • MIR is no more about retrieval than computer science is about computers
  • Music Information Retrieval – it’s too narrow
  • Music Information or Information about Music?
  • Interested in the interaction with music information
  • We should be asking more profound questions
    • music
    • content treasures in short musical excerpts, tracks, performances, etc.
    • context
  • music understanding systems
  • Most metadata will be generated in the creation / production phase (hmm… I don’t necessarily agree; all the good metadata (tags, who likes what) is based on context and use, which is post-hoc)
  • Instead of automatic analysis – build systems to help humans help humans
  • Music like water? Or music as dog!!! – a friend, a companion
  • Personalization, Findability
  • A music Turing test

Good, provocative talk

Oral Session 2: Potential future MIR applications

Session Chair: Jason Hockman (McGill University), Program Chair of f(MIR)

Machine Listening to Percussion: Current Approaches and Future Directions – [pdf]

Michael Ward

Abstract: Various approaches have been taken to detect and classify percussive events within music signals, for a variety of purposes with differing and converging aims. In this paper an overview of those technologies is presented, together with a discussion of the issues still to overcome and of future possibilities in the field. Finally, a system capable of monitoring a student drummer is envisaged which draws together current approaches and future work in the field.


  • Challenges: onset detection of isolated drum strokes
  • Onset detection and classification of overlapping drum sounds
  • Onset detection and classification in the presence of other instruments
  • Variability in percussive sounds: dozens of criteria affect the sounds produced (strike velocity, angle, position, etc.)
  • Future Research Areas
    • Extension of recognition to include the wide variety of strokes (open hi-hat, half-open hi-hat, hi-hat foot splash, etc.)
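For a feel of what the simplest drum-onset detectors do, here's a toy energy-based detector (my own sketch, not from the paper): it flags frames whose energy jumps well above the recent average, with a minimum gap so a single hit isn't counted twice. Real systems use spectral features and classifiers, as the talk covered.

```python
import math
import random

def onset_times(samples, sr, frame=256, threshold=2.0, min_gap=0.05):
    """Detect onsets as frames whose energy jumps well above the running
    average of the previous few frames (a crude energy-based detector)."""
    energies = []
    for start in range(0, len(samples) - frame, frame):
        e = sum(x * x for x in samples[start:start + frame]) / frame
        energies.append(e)
    onsets = []
    for i in range(1, len(energies)):
        prev = sum(energies[max(0, i - 4):i]) / min(i, 4)
        t = i * frame / sr
        if energies[i] > threshold * (prev + 1e-9):
            if onsets and t - onsets[-1] < min_gap:
                continue  # too close to the previous onset: same hit
            onsets.append(t)
    return onsets

# Synthetic test signal: silence plus two decaying noise bursts ("drum hits")
random.seed(0)
sr = 8000
samples = [0.0] * sr
for hit_time in (0.25, 0.6):
    start = int(hit_time * sr)
    for i in range(1000):
        samples[start + i] += random.uniform(-1, 1) * math.exp(-i / 200.0)

print(onset_times(samples, sr))
```

This falls apart exactly where the talk said current methods do: overlapping drum sounds and other instruments in the mix.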

MIR When All Recordings Are Gone: Recommending Live Music in Real-Time –  [pdf]

Marco Lüthy and Jean-Julien Aucouturier

Recommending live and short-lived events. Bandsintown, Songkick, Gigulate … pay attention to this paper.

Aucouturier-fmir.pdf (page 3 of 3)


  • Recommendation for live music in real-time
  • Coldplay -> a free album when you buy a ticket to a Coldplay concert – give away the music
  • NIN -> USB keys left in the toilets at concerts – the files held strange recordings; an FFT of the sounds revealed a phone number and GPS coordinates, turning it into a treasure hunt leading to a Nine Inch Nails concert.
  • Komuso Tokugawa – an avatar for a musician in Second Life. Plays in Second Life, tweets concert announcements (“playing a wake for Les Paul in 3 minutes”)
  • ‘How do we get there in time?’
  • JJ walked through how to implement a recommender system in Second Life
  • Implicit preference inferred from how long your avatar listens to a concert (Nicole Yankelovich at Sun Labs should look at this stuff)
  • Great talk by JJ – full of energy – neat ideas. Good work.
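Inferring implicit preference from dwell time could be as simple as this sketch (my own; the thresholds and the listening log are invented): how long an avatar stays at a virtual show becomes a 0..1 preference score, and very short stays carry no signal at all.

```python
def implicit_rating(listened_sec, set_length_sec, min_fraction=0.1):
    """Turn 'how long the avatar stayed at the show' into a 0..1
    preference score; very short stays carry no signal and return None."""
    frac = min(listened_sec / set_length_sec, 1.0)
    return None if frac < min_fraction else frac

# Hypothetical listening log: seconds spent at three 30-minute sets
log = {"setA": 1700, "setB": 400, "setC": 60}
ratings = {show: implicit_rating(t, 1800) for show, t in log.items()}
print(ratings)
```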


Poster Session

  • Global Access to Ethnic Music: The Next Big Challenge?
    Olmo Cornelis, Dirk Moelants and Marc Leman
  • The Future of Music IR: How Do You Know When a Problem Is Solved?
    Eric Nichols and Donald Byrd



ISMIR 2009 – The Industry Panel

On Thursday I participated in the ISMIR industry panel. Eight members of industry talked about the issues and challenges that they face. I had a good time on the panel, the panelists were all on target and very thoughtful, and there were great questions from the audience. I’m happy too that the IRC channel offered a place for people to vent without the session turning into an SXSW-style riot.

Justin Donaldson kept good notes on the panel and has posted them on his blog: ISMIR 2009 Industry Panel



ISMIR Keynote – Wind instrument-playing humanoid robots

What’s not to love!?!? Robots and Music! This was a great talk.

Wind instrument-playing humanoid robots
Atsuo Takanishi

Some history of robots:

Wabot-2 – early music playing robot

Wabian-2 – walking robots

Emotional Robots

Kobian: Emotional humanoid robot

Voice Producing Robots

Music Performance Robots




ISMIR – MIREX Panel Discussion

Stephen Downie presents the MIREX session

Statistics for 2009:

  • 26 tasks
  • 138 participants
  • 289 evaluation runs

Results are now published:

This year, new datasets:

  • Mazurkas
  • MIR 1K
  • Bach Chorales
  • Chord and Segmentation datasets
  • Mood dataset
  • Tag-a-Tune

Evalutron 6K – Human evaluations – this year, 50 graders / 7500 possible grading events.

What’s Next?

Issues about MIREX

  • Rein in the parameter explosion
  • Not rigorously tested algorithms
  • Hard-coded parameters, path-separators, etc
  • Poorly specified data inputs/outputs
  • Dynamically linked libraries
  • Windows submissions
  • Pre-compiled Matlab/MEX Submissions
  • The ‘graduation’ problem – Andreas and Cameron will be gone in summer.

Long discussion with people opining about tests, data. Ben Fields had a particularly good point about trying to make MIREX better reflect real systems that draw upon web resources.




ISMIR Oral Session 5 – Tags

Oral Session 5 – Tags

Session Chair: Paul Lamere

I’m the session chair for this session, so I can’t keep notes. So instead I offer the abstracts.


Fei Wang, Xin Wang, Bo Shao, Tao Li, Mitsunori Ogihara

Abstract: Automatic music style classification is an important, but challenging problem in music information retrieval. It has a number of applications, such as indexing of and searching in musical databases. Traditional music style classification approaches usually assume that each piece of music has a unique style and they make use of the music contents to construct a classifier for classifying each piece into its unique style. However, in reality, a piece may match more than one, even several different styles. Also, in this modern Web 2.0 era, it is easy to get a hold of additional, indirect information (e.g., music tags) about music. This paper proposes a multi-label music style classification approach, called Hypergraph integrated Support Vector Machine (HiSVM), which can integrate both music contents and music tags for automatic music style classification. Experimental results based on a real world data set are presented to demonstrate the effectiveness of the method.

ismir2009-proceedings.pdf (page 372 of 775)


Matthew D. Hoffman, David M. Blei, Perry R. Cook

ABSTRACT Many songs in large music databases are not labeled with semantic tags that could help users sort out the songs they want to listen to from those they do not. If the words that apply to a song can be predicted from audio, then those predictions can be used both to automatically annotate a song with tags, allowing users to get a sense of what qualities characterize a song at a glance. Automatic tag prediction can also drive retrieval by allowing users to search for the songs most strongly characterized by a particular word. We present a probabilistic model that learns to predict the probability that a word applies to a song from audio. Our model is simple to implement, fast to train, predicts tags for new songs quickly, and achieves state-of-the-art performance on annotation and retrieval tasks.
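The model really is simple: to the best of my understanding it is a Codeword Bernoulli Average-style predictor, where a song is a bag of vector-quantized codewords and the probability that a tag applies is an average of per-codeword Bernoulli parameters weighted by the song's codeword counts. A toy sketch of the prediction step, with invented, already-"trained" parameters:

```python
def tag_probability(codeword_counts, beta_tag):
    """p(tag applies | song) under a Codeword-Bernoulli-Average-style
    model: average the per-codeword Bernoulli parameters beta_tag[k],
    weighted by how often codeword k occurs in the song."""
    total = sum(codeword_counts)
    return sum(n * beta_tag[k] for k, n in enumerate(codeword_counts)) / total

# Hypothetical trained parameters for the tag "distorted": codeword 2
# (say, a noisy spectral cluster) strongly predicts the tag.
beta_distorted = [0.05, 0.10, 0.90, 0.20]
print(tag_probability([1, 0, 8, 1], beta_distorted))  # song dominated by codeword 2
```

Retrieval then just ranks songs by this probability for the query tag.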

ismir2009-proceedings.pdf (page 381 of 775)


Joon Hee Kim, Brian Tomasik, Douglas Turnbull

ABSTRACT Tags are useful text-based labels that encode semantic information about music (instrumentation, genres, emotions, geographic origins). While there are a number of ways to collect and generate tags, there is generally a data sparsity problem in which very few songs and artists have been accurately annotated with a sufficiently large set of relevant tags. We explore the idea of tag propagation to help alleviate the data sparsity problem. Tag propagation, originally proposed by Sordo et al., involves annotating a novel artist with tags that have been frequently associated with other similar artists. In this paper, we explore four approaches for computing artist similarity based on different sources of music information (user preference data, social tags, web documents, and audio content). We compare these approaches in terms of their ability to accurately propagate three different types of tags (genres, acoustic descriptors, social tags). We find that the approach based on collaborative filtering performs best. This is somewhat surprising considering that it is the only approach that is not explicitly based on notions of semantic similarity. We also find that tag propagation based on content-based music analysis results in relatively poor performance.
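The propagation step itself is easy to sketch (my own toy version, with invented artists and similarity scores): weight each similar artist's tags by its similarity to the novel artist and keep the highest-scoring tags.

```python
from collections import Counter

def propagate_tags(novel_artist_sims, artist_tags, top_k=3):
    """Annotate a novel artist with the tags of its most similar known
    artists, each tag weighted by the similarity score."""
    scores = Counter()
    nearest = sorted(novel_artist_sims.items(), key=lambda kv: -kv[1])[:top_k]
    for artist, sim in nearest:
        for tag in artist_tags.get(artist, ()):
            scores[tag] += sim
    return scores.most_common()

# Hypothetical similarities from one of the four sources (e.g. CF)
sims = {"Artist A": 0.9, "Artist B": 0.7, "Artist C": 0.2}
tags = {"Artist A": ["indie", "rock"], "Artist B": ["rock"], "Artist C": ["jazz"]}
print(propagate_tags(sims, tags, top_k=2))
```

The paper's four variants differ only in where `sims` comes from (preference data, social tags, web documents, or audio).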

ismir2009-proceedings.pdf (page 387 of 775)

Music Mood Representations from Social Tags

Cyril Laurier, Mohamed Sordo, Joan Serrà, Perfecto Herrera

ABSTRACT This paper presents findings about mood representations. We aim to analyze how people tag music by mood, to create representations based on this data and to study the agreement between experts and a large community. For this purpose, we create a semantic mood space from tags using Latent Semantic Analysis. With an unsupervised clustering approach, we derive from this space an ideal categorical representation. We compare our community based semantic space with expert representations from Hevner and the clusters from the MIREX Audio Mood Classification task. Using dimensional reduction with a Self-Organizing Map, we obtain a 2D representation that we compare with the dimensional model from Russell. We present as well a tree diagram of the mood tags obtained with a hierarchical clustering approach. All these results show a consistency between the community and the experts as well as some limitations of current expert models. This study demonstrates a particular relevancy of the basic emotions model with four mood clusters that can be summarized as: happy, sad, angry and tender. This outcome can help to create better ground truth and to provide more realistic mood classification algorithms. Furthermore, this method can be applied to other types of representations to build better computational models.
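To get a feel for the clustering step, here's a toy sketch of my own: it skips the LSA/SVD stage entirely and just greedily groups mood tags whose tag-by-song count vectors are close in cosine similarity (the tag names and counts are invented).

```python
import math

def cosine(u, v):
    """Cosine similarity between two count vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def group_tags(tag_vectors, threshold=0.8):
    """Greedy single-link grouping of tags by cosine similarity of
    their tag-by-song count vectors."""
    groups = []
    for tag, vec in tag_vectors.items():
        for g in groups:
            if any(cosine(vec, tag_vectors[t]) >= threshold for t in g):
                g.append(tag)
                break
        else:
            groups.append([tag])
    return groups

# Toy tag-by-song count vectors (rows: tags, columns: songs)
vectors = {
    "happy": [5, 4, 0, 0],
    "cheerful": [4, 5, 0, 0],
    "sad": [0, 0, 5, 4],
    "melancholic": [0, 0, 4, 5],
}
print(group_tags(vectors))
```

In the paper the vectors live in the LSA-reduced semantic space rather than raw counts, which is what lets the clusters line up with models like Hevner's.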


Edith Law, Kris West, Michael Mandel, Mert Bay, J. Stephen Downie

Abstract Search by keyword is an extremely popular method for retrieving music. To support this, novel algorithms that automatically tag music are being developed. The conventional way to evaluate audio tagging algorithms is to compute measures of agreement between the output and the ground truth set. In this work, we introduce a new method for evaluating audio tagging algorithms on a large scale by collecting set-level judgments from players of a human computation game called TagATune. We present the design and preliminary results of an experiment comparing five algorithms using this new evaluation metric, and contrast the results with those obtained by applying several conventional agreement-based evaluation metrics.

ismir2009-proceedings.pdf (page 400 of 775)



ISMIR Poster Madness #3



ISMIR Oral Session 4 – Music Recommendation and playlisting

Music Recommendation and playlisting

Session Chair:  Douglas Turnbull


by Kazuyoshi Yoshii and Masataka Goto

  • Unexpected encounters with unknown songs are increasingly important.
  • Want accurate and diversified recommendations
  • Use a probabilistic approach suited to dealing with the uncertainty of rating histories
  • Compares CF vs. content-based vs. their hybrid filtering system

Approach: Use pLSI to create a 3-way aspect model, user–song–feature, where the unobservable category captures genre, tempo, vocal age, popularity, etc. In pLSI, typical patterns are given by relationships between users, songs and a limited number of topics. Some drawbacks: pLSI needs discrete features, and multinomial distributions are assumed. To deal with this they formulate a continuous pLSI, using Gaussian mixture models to handle continuous distributions. Continuous pLSI has its own drawbacks: the local-minimum problem and the hub problem – popular songs get recommended too often because of the hubs. They deal with these in two ways. Gaussian parameter tying reduces the number of free parameters: only the mixture weights vary. Artist-based song clustering: train an artist-based model and update it to a song-based model with an incremental training method (from 2007).
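To make the aspect model concrete, here's a tiny sketch of my own, with invented parameters and simplified to drop the feature term: a discrete aspect model scores a user–song pair by summing over the latent topics. In the paper's continuous variant the per-topic song distributions become Gaussian mixtures over audio features.

```python
def aspect_score(user, song, p_z, p_user_z, p_song_z):
    """p(user, song) = sum_z p(z) * p(user|z) * p(song|z) -- the discrete
    aspect model; candidate songs are ranked by this score."""
    return sum(p_z[z] * p_user_z[z][user] * p_song_z[z][song]
               for z in range(len(p_z)))

# Hypothetical trained parameters: 2 latent topics, 2 users, 2 songs
p_z = [0.6, 0.4]
p_user_z = [{"u1": 0.8, "u2": 0.2}, {"u1": 0.1, "u2": 0.9}]
p_song_z = [{"rock_song": 0.7, "jazz_song": 0.3},
            {"rock_song": 0.2, "jazz_song": 0.8}]

print(aspect_score("u1", "rock_song", p_z, p_user_z, p_song_z))
```

Hubness shows up here when a popular song's p(song|z) is large under many topics, so it wins the ranking for nearly every user; parameter tying limits how far the model can drift that way.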

Here’s the system model:

ismir2009-proceedings.pdf (page 347 of 775)

Evaluation: They found that the techniques for adjusting model complexity significantly improved the accuracy of recommendations, and that the second technique could also reduce hubness.


François Maillet, Douglas Eck, Guillaume Desjardins, Paul Lamere

This paper presents an approach to generating steerable playlists. They first demonstrate a method for learning song transition probabilities from audio features extracted from songs played in professional radio station playlists and then show that by using this learnt similarity function as a prior, they are able to generate steerable playlists by choosing the next song to play not simply based on that prior, but on a tag cloud that the user is able to manipulate to express the high-level characteristics of the music he wishes to listen to.

  • Learn a similarity space from commercial radio station playlists
  • generate steerable playlists
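The steering idea can be sketched like this (my own toy version, with hypothetical scores): blend the learnt transition prior with each candidate's affinity to the user's tag cloud, and let a steering weight trade one off against the other.

```python
def next_song(current, candidates, transition_prob, tag_affinity, steer=0.5):
    """Pick the next playlist song by combining the learnt transition
    prior with how well each candidate matches the user's tag cloud.
    steer=0 follows the prior only; steer=1 follows the tags only."""
    def score(song):
        prior = transition_prob.get((current, song), 0.0)
        return (1 - steer) * prior + steer * tag_affinity.get(song, 0.0)
    return max(candidates, key=score)

# Hypothetical learnt transition probabilities and tag-cloud affinities
trans = {("songA", "songB"): 0.7, ("songA", "songC"): 0.3}
affinity = {"songB": 0.1, "songC": 0.9}  # the user's tag cloud favours songC
print(next_song("songA", ["songB", "songC"], trans, affinity, steer=0.8))
```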

François defines a playlist. Data sources: Radio Paradise plus another service’s API; 7 million tracks.

Problem: they had positive examples but no explicit set of negative examples, so they chose negatives at random.

Learning the song space: Trained a binary classifier to determine if a song sequence is real.
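The training-set construction can be sketched like this (my own toy version; the song IDs and playlists are invented): consecutive pairs from real playlists are the positives, and random pairs stand in as negatives.

```python
import random

def training_pairs(playlists, all_songs, seed=0):
    """Positive examples: consecutive song pairs observed in playlists.
    Negative examples: random song pairs, one per positive (the paper
    had no explicit negatives, so they were sampled at random)."""
    rng = random.Random(seed)
    pos = [(a, b) for pl in playlists for a, b in zip(pl, pl[1:])]
    neg = [(rng.choice(all_songs), rng.choice(all_songs)) for _ in pos]
    return [(p, 1) for p in pos] + [(n, 0) for n in neg]

playlists = [["s1", "s2", "s3"], ["s2", "s4"]]
pairs = training_pairs(playlists, ["s1", "s2", "s3", "s4", "s5"])
print(pairs)
```

A binary classifier trained on these labeled pairs then gives the similarity prior.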

Features: timbre, rhythm/danceability, loudness


ismir2009-proceedings.pdf (page 358 of 775)


Klaas Bosteels, Elias Pampalk, Etienne Kerre

Abstract: In this paper, we analyse and evaluate several heuristics for adding songs to a dynamically generated playlist. We explain how radio logs can be used for evaluating such heuristics, and show that formalizing the heuristics using fuzzy set theory simplifies the analysis. More concretely, we verify previous results by means of a large scale evaluation based on 1.26 million listening patterns extracted from radio logs, and explain why some heuristics perform better than others by analysing their formal definitions and conducting additional evaluations.


  • Dynamic playlist generation
  • Formalization using fuzzy sets. Sets of accepted songs and sets of rejected songs
  • Why are the last two songs not accepted? To make sure the listener is still paying attention?
  • Interesting observation: what matters most is membership in the fuzzy set of rejected songs. Why? Inconsistent skipping behavior.
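As a toy illustration of the fuzzy-set formalization (my own sketch, not the paper's exact operators): a candidate's score is its degree of membership in "similar to an accepted song AND not similar to a rejected one", using max as the fuzzy union over examples and min as the fuzzy AND.

```python
def fuzzy_score(candidate, accepted, rejected, sim):
    """Degree to which a candidate belongs to 'similar to an accepted
    song AND not similar to a rejected song'."""
    mu_acc = max((sim(candidate, s) for s in accepted), default=0.0)
    mu_rej = max((sim(candidate, s) for s in rejected), default=0.0)
    return min(mu_acc, 1.0 - mu_rej)

# Toy similarity: songs are points on a line, similarity decays with distance
positions = {"a": 0.0, "b": 0.2, "c": 1.0, "x": 0.1, "y": 0.9}
sim = lambda s, t: max(0.0, 1.0 - abs(positions[s] - positions[t]))

print(fuzzy_score("x", accepted=["a", "b"], rejected=["c"], sim=sim))
print(fuzzy_score("y", accepted=["a", "b"], rejected=["c"], sim=sim))
```

The observation above maps directly onto the `mu_rej` term: if the rejected-set membership dominates the score, skips carry most of the signal.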

ismir2009-proceedings.pdf (page 361 of 775)


Luke Barrington, Reid Oda, Gert Lanckriet

Abstract: Genius is a popular commercial music recommender system that is based on collaborative filtering of huge amounts of user data. To understand the aspects of music similarity that collaborative filtering can capture, we compare Genius to two canonical music recommender systems: one based purely on artist similarity, the other purely on similarity of acoustic content. We evaluate this comparison with a user study of 185 subjects. Overall, Genius produces the best recommendations. We demonstrate that collaborative filtering can actually capture similarities between the acoustic content of songs. However, when evaluators can see the names of the recommended songs and artists, we find that artist similarity can account for the performance of Genius. A system that combines these musical cues could generate music recommendations that are as good as Genius, even when collaborative filtering data is unavailable.

Great talk, lots of things to think about.

ismir2009-proceedings.pdf (page 370 of 775)

