Archive for category Music
10 Awesome things about ISMIR 2009
ISMIR 2009 is over – but it will not be soon forgotten. It was a wonderful event, with seemingly flawless execution. Some of my favorite things about the conference this year:
- The proceedings – distributed on a USB stick hidden in a pen that has a laser! And the battery for the laser recharges when you plug the USB stick into your computer. How awesome is that!? (The printed version is very nice too, but it doesn’t have a laser).
- The hotel – very luxurious while at the same time, very affordable. I had a wonderful view of Kobe, two very comfortable beds and a toilet with more controls than the dashboard on my first car.
- The presentation room – very comfortable with tables for those sitting towards the front, great audio and video and plenty of power and wireless for all.
- The banquet – held in the most beautiful room in the world with very exciting Taiko drumming as entertainment.
- The details – it seemed like the organizing team paid attention to every little detail and request – from the taped numbers on the floor so that the 30 folks giving their 30-second pitches during poster madness would know just where to stand, to the signs on the coffeepots telling you that coffee was being made, to the signs on the train to the conference center welcoming us to ISMIR 2009. No detail was left to chance.
- The food – our stomachs were kept quite happy – with sweet breads and pastries every morning, bento boxes for lunch, and coffee, juices, waters, and the mysterious beverage ‘black’ that I didn’t dare to try. My absolute favorite meal was the box lunch during the tutorial day – it was a box with a string – when you are ready to eat you give the string a sharp tug – wait a few minutes for the magic to do its job and then you open the box and eat a piping hot bowl of noodles and vegetables. Almost as cool as the laser-augmented proceedings.
- The city – Kobe is a really interesting city – I spent a few days walking around and was fascinated by it all. I really felt like I was walking around in the future. It was extremely clean, and the people were very polite, friendly and always willing to help. Going into some parts of town was sensory overload – the colors, sounds, smells and sights were overwhelming – it was really fun.
- The Keynote – music-making robots – what more is there to say?
- The Program – the quality of the papers was very high – there were some outstanding posters and oral presentations. Many thanks to George and Keiji for organizing the reviews to create a great program. (More on my favorite posters and papers in an upcoming post.)
- f(mir) – The student-organized workshop looked at what MIR research might look like in 10, 20 or even 50 years (basically after I’m dead and gone). The presentations in this workshop were quite provocative – well done, students!
I write this post as I sit in the airport in Osaka waiting for my flight home. I’m tired, but very energized to explore the many new ideas that I encountered at the conference. It was a great week. I want to extend my personal thanks to Professor Fujinaga and Professor Goto and the rest of the conference committee for putting together a wonderful week.
ISMIR Oral Session – Sociology and Ethnomusicology
Session Title: Sociology & Ethnomusicology
Session Chair: Frans Wiering (Universiteit Utrecht, Netherlands)
Exploring Social Music Behavior: An Investigation of Music Selection at Parties
Sally Jo Cunningham and David M. Nichols
Abstract: This paper builds an understanding of how music is currently listened to by small (fewer than 10 individuals) to medium-sized (10 to 40 individuals) gatherings of people—how songs are chosen for playing, how the music fits in with other activities of group members, who supplies the music, and the hardware/software that supports song selection and presentation. This fine-grained context emerges from a qualitative analysis of a rich set of participant observations and interviews focusing on the selection of songs to play at social gatherings. We suggest features for software to support music playing at parties.
Notes:
- What happens at parties, especially informal small and medium sized parties
- Observations and interviews – 43 party observations
- Analyzing the data: key events that drive the activity, patterns of behavior, social roles
- Observations
- music selection cannot require fine motor movements, because guests are drinking and holding their drinks (‘drinking dyslexia’)
- Need for large displays
- Party collection from different donors, sources, media
- Pre-party: host collection
- As the party progresses: additional contributions (iPods, thumbdrives, etc.)
- Challenge: bring together into a single browseable searchable collection
- Roles: Host, guest, guest of honor. Host provides initial collection, party playlist. High stress ‘guilty pleasures’
- Guests: may contribute, could insult the host, may modify the party playlist if they receive an invitation from the host. Voting jukeboxes may help
- Guest of Honor had ultimate control
- insertion into playlist, looking for specific song, type of song.
- Delete songs from playlist without disrupting the party
- Setting and maintaining atmosphere
- softer at the start, moving to faster and louder, ending with chilling out
- What’s next: other situations, such as a long car ride
- Questions: could Spotify be turned into the best party jukebox?
Great study, great presentation.
Music and Geography: Content Description of Musical Audio from Different Parts of the World
Emilia Gómez, Martín Haro and Perfecto Herrera
Abstract: This paper analyses how audio features related to different musical facets can be useful for the comparative analysis and classification of music from diverse parts of the world. The music collection under study gathers around 6,000 pieces, including traditional music from different geographical zones and countries, as well as a varied set of Western musical styles. We achieve promising results when trying to automatically distinguish music from Western and non-Western traditions. An accuracy of 86.68% is obtained using only 23 audio features, which are representative of distinct musical facets (timbre, tonality, rhythm), indicating their complementarity for music description. We also analyze the relative performance of the different facets and the capability of various descriptors to identify certain types of music. We finally present some results on the relationship between geographical location and musical features in terms of extracted descriptors. All the reported outcomes demonstrate that automatic description of audio signals together with data mining techniques provides a means to characterize huge music collections from different traditions, complementing ethnomusicological manual analysis and providing a link between music and geography.
You Call That Singing? Ensemble Classification for Multi-Cultural Collections of Music Recordings
Polina Proutskova and Michael Casey
Abstract: The wide range of vocal styles, musical textures and recording techniques found in ethnomusicological field recordings leads us to consider the problem of automatically labeling the content to know whether a recording is a song or instrumental work. Furthermore, if it is a song, we are interested in labeling aspects of the vocal texture: e.g. solo, choral, acapella or singing with instruments. We present evidence to suggest that automatic annotation is feasible for recorded collections exhibiting a wide range of recording techniques and representing musical cultures from around the world. Our experiments used the Alan Lomax Cantometrics training tapes data set, to encourage future comparative evaluations. Experiments were conducted with a labeled subset consisting of several hundred tracks, annotated at the track and frame levels, as acapella singing, singing plus instruments or instruments only. We trained frame-by-frame SVM classifiers using MFCC features on positive and negative exemplars for two tasks: per-frame labeling of singing and acapella singing. In a further experiment, the frame-by-frame classifier outputs were integrated to estimate the predominant content of whole tracks. Our results show that frame-by-frame classifiers achieved 71% frame accuracy and whole track classifier integration achieved 88% accuracy. We conclude with an analysis of classifier errors suggesting avenues for developing more robust features and classifier strategies for large ethnographically diverse collections.
ISMIR 2009 – The Future of MIR
Posted by Paul in ismir, Music, recommendation on October 29, 2009
This year ISMIR concludes with the 1st Workshop on the Future of MIR. The workshop is organized by students who are indeed the future of MIR.
09:00-10:00 Special Session: 1st Workshop on the Future of MIR
MIR, where we are, where we are going
Session Chair: Amélie Anglade Program Chair of f(MIR)
Meaningful Music Retrieval
Frans Wiering – [pdf]
Notes
- Some unfortunate tendencies: an anatomical view of music – a dead body on which we perform autopsies; time is the loser; a traditional, production-oriented view
- Measure of similarity: relevance, surprise
- Few interesting applications for end-users
- bad fit to present-day musicological themes
- We are in the world of ‘pure applied research’ – no true interdisciplinarity between music domain knowledge and computer science.
- Music is meaningful (and the underlying personal motivation of most MIR researchers).
- Meaning in musicology – traditionally a taboo subject
- Subjectivity: an individual’s disposition to engage in social and cultural interactions
- Meaning generation process – we have a long-term memory for music –
- Can musical meaning provide the ‘big story line’ for MIR?
The Discipline Formerly Known As MIR
Perfecto Herrera, Joan Serrà, Cyril Laurier, Enric Guaus, Emilia Gómez and Xavier Serra
Intro: Our exploration is not a science-fiction essay. We do not try to imagine how music will be conceptualized, experienced and mediated by our yet-to-come research, technological achievements and music gizmos. Instead, we reflect on how the discipline should evolve to become consolidated as such, so that it may have an effective future instead of becoming, after a promising start, just a “would-be” discipline. Our vision addresses different aspects: the discipline’s object of study, the employed methodologies, and social and cultural impacts (which are out of this long abstract because of space restrictions), and we finish with some (maybe) disturbing issues that could be taken as partial and biased guidelines for future research.
Notes: One motivation for advancing MIR – more banquets!
- MIR is no more about retrieval than computer science is about computers
- Music Information Retrieval – it’s too narrow
- Music Information or Information about Music?
- Interested in the interaction with music information
- We should be asking more profound questions
- music
- content treasures in short musical excerpts, tracks, performances, etc.
- context
- music understanding systems
- Most metadata will be generated in the creation / production phase (hmm.. don’t agree necessarily, all the good metadata (tags, who likes what) is based on context and use which is post-hoc)
- Instead of automatic analysis – build systems to help humans help humans
- Music like water? or Music as dog!!! – a friend – companion –
- Personalization, Findability
- Music turing test
Good, provocative talk
Oral Session 2: Potential future MIR applications
Session Chair: Jason Hockman (McGill University), Program Chair of f(MIR)
Machine Listening to Percussion: Current Approaches and Future Directions – [pdf]
Michael Ward
Abstract: Many approaches have been taken to detect and classify percussive events within music signals, for a variety of purposes with differing and converging aims. In this paper an overview of those technologies is presented, along with a discussion of the issues still to be overcome and future possibilities in the field. Finally, a system capable of monitoring a student drummer is envisaged which draws together current approaches and future work in the field.
Notes:
- Challenges: Onset detection of isolated drum strokes
- Onset detection and classification of overlapping drum sounds
- Onset detection and classification in the presence of other instruments
- Variability in percussive sounds: dozens of factors affect the sounds produced (strike velocity, angle, position, etc.)
- Future Research Areas
- Extension of recognition to include the wide variety of strokes (open hi-hat, half-open hi-hat, hi-hat foot splash, etc.)
MIR When All Recordings Are Gone: Recommending Live Music in Real-Time – [pdf]
Marco Lüthy and Jean-Julien Aucouturier
Recommending live and short lived events. Bandsintown, Songkick, gigulate … pay attention to this paper.
Notes:
- Recommendation for live music in real-time
- Coldplay -> free album when you get a ticket to a Coldplay concert – give away the music
- NIN -> USB keys left in the toilets containing strange recordings – an FFT of the sounds revealed a phone number and GPS coordinates – which turned into a treasure hunt leading to a NIN concert.
- Komuso Tokugawa – an avatar for a musician in Second Life. Plays in Second Life, tweets concert announcements (‘playing a wake for Les Paul in 3 minutes’)
- ‘How do we get there in time?’
- JJ walked through how to implement a recommender system in second life
- Implicit preference inferred from how long your avatar listens to a concert (Nicole Yankelovich at Sun Labs should look at this stuff)
- Great talk by JJ – full of energy – neat ideas. Good work.
Poster Session
- Global Access to Ethnic Music: The Next Big Challenge?
Olmo Cornelis, Dirk Moelants and Marc Leman
- The Future of Music IR: How Do You Know When a Problem Is Solved?
Eric Nichols and Donald Byrd
ISMIR Oral Session 7 – Harmonic & Melodic Similarity and Summarization
10:30-12:30 Oral Session (OS7) – Harmonic & Melodic Similarity and Summarization
Session Chair: Emilia Gómez (Universitat Pompeu Fabra, Spain)
Nicola Orio and Antonio Rodà
- high-level music dimensions are not reliably computed from audio
- musicologists are more interested in scores
- results with symbolic formats can be a reference for audio-based approaches
- melodic similarity is not a solved problem
- Overview of the approach:
by Bas de Haas, Martin Rohrmeier, Remco Veltkamp and Frans Wiering
- Extract chord labels from audio and symbolic data (not the research focus)
- Not all info is in the data. Need a grammatical model of tonal harmony
- Conveying musical structure (slowing down at boundaries for example)
- Prosody ( stress, direction, grouping) – the heart of the matter
- Musical Affect (happy, sad, etc) – not easy, so ignores this one
This was a very good talk.
Maksim Khadkevich and Maurizio Omologo
Sam Ferguson and Densil Cabrera
Cynthia C.S. Liem and Alan Hanjalic
ISMIR Keynote – Wind instrument-playing humanoid robots
What’s not to love!?!? Robots and Music! This was a great talk.
Wind instrument-playing humanoid robots
Atsuo Takanishi
Some history of robots:
Wabot-2 – early music playing robot
Wabian-2 – walking robots
Emotional Robots
Kobian: Emotional humanoid robot
Voice Producing Robots
Music Performance Robots
Google’s new music search
The news wires are abuzz with Google’s new music search feature. The new Google feature will allow users to search for an artist, song, album or lyric and get a music result that will include album art and a ‘play’ button that will let you listen to the music. MySpace and Lala will be serving up the music and you’ll be able to play any song in full just once. The music results will also include links to Pandora, imeem and Rhapsody. Lyrics search is provided by Gracenote.
Here’s the video announcement:
It’s about time that Google starts to include the ability to listen to search results – this will help. It’s pretty cool, but I don’t think it changes the music discovery game too much. Search is not discovery.
Update: The Register is particularly unimpressed: “Trying to forcefeed punters a lousy service is a bad idea, amplified by the assumption that if Facebook and Google are the feeding tube, we’ll suck it up.”
The SQL Join is destroying music
Posted by Paul in events, Music, The Echo Nest on October 28, 2009
Brian Whitman, one of the founders of the Echo Nest, gave a provocative talk last week at Music and Bits. Some excerpts:
Useless MIR Problems:
- Genre Identification – “Countless PhDs on this useless task. Trying to teach a computer a marketing construct”
Hard but interesting MIR Problems:
- Finding the saddest song in the world
- Predicting Pitchfork and All Music Guide ratings
- Predicting the gender of a listener based upon their music taste
On Recommendation:
- “The best music experience is still very manual… I am still reading about music, not using a recommender.”
- “If we only used collaborative filtering to discover music, the popular artists would eat the unknowns alive.”
- “The SQL Join is destroying music”
Brian’s notes on the talk are on his blog. The slides are online here. Highly recommended:
ISMIR Oral Session 6 – Similarity
Oral Session 6 – Similarity
Chair: Roger Dannenberg
ON RHYTHM AND GENERAL MUSIC SIMILARITY
Tim Pohle, Dominik Schnitzer, Markus Schedl, Peter Knees and Gerhard Widmer
Paper: pdf
Abstract: The contribution of this paper is threefold: First, we propose modifications to Fluctuation Patterns [14]. The resulting descriptors are evaluated in the task of rhythm similarity computation on the “Ballroom Dancers” collection. Second, we show that by combining these rhythmic descriptors with a timbral component, results for rhythm similarity computation are improved beyond the level obtained when using the rhythm descriptor component alone. Third, we present one “unified” algorithm with a fixed parameter set. This algorithm is evaluated on three different music collections. We conclude from these evaluations that the computed similarities reflect relevant aspects both of rhythm similarity and of general music similarity. The performance can be improved by tuning parameters of the “unified” algorithm to the specific task (rhythm similarity / general music similarity) and the specific collection, respectively.
Notes:
- B&O recommender used OFAI
- Nice results
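For readers unfamiliar with Fluctuation Patterns: the core idea is to measure how the loudness in each frequency band fluctuates periodically over time, by taking an FFT along the time axis of per-band loudness curves. Here is a minimal sketch of that idea only – the band decomposition, perceptual weighting and the paper's specific modifications are all omitted, and the function name is mine:

```python
import numpy as np

def fluctuation_pattern(band_energies, frame_rate, max_mod_hz=10.0):
    """Sketch of a Fluctuation-Pattern-style rhythm descriptor.

    band_energies: (n_bands, n_frames) array of per-band loudness over time.
    Returns an (n_bands, n_mod_bins) modulation spectrum: for each band,
    the strength of periodic loudness fluctuations at modulation
    frequencies up to max_mod_hz (rhythm lives at low modulation freqs).
    """
    n_bands, n_frames = band_energies.shape
    # FFT along time measures how each band's loudness oscillates
    spectrum = np.abs(np.fft.rfft(band_energies, axis=1))
    mod_freqs = np.fft.rfftfreq(n_frames, d=1.0 / frame_rate)
    return spectrum[:, mod_freqs <= max_mod_hz]

# Toy example: a single band pulsing at 2 Hz (120 BPM)
frame_rate = 100.0                          # frames per second
t = np.arange(500) / frame_rate
band = np.maximum(0.0, np.sin(2 * np.pi * 2.0 * t))[None, :]
fp = fluctuation_pattern(band, frame_rate)
mod_freqs = np.fft.rfftfreq(500, d=1.0 / frame_rate)
peak = mod_freqs[mod_freqs <= 10.0][np.argmax(fp[0, 1:]) + 1]  # skip DC
print(peak)  # prints 2.0 – the pulse rate is recovered
```

A rhythm similarity measure then compares these descriptor matrices between songs (e.g. by Euclidean distance after flattening), which is where the paper's tuning and combination with timbre comes in.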
GROUPING RECORDED MUSIC BY STRUCTURAL SIMILARITY
Juan Pablo Bello
Paper: PDF
Abstract: This paper introduces a method for the organization of recorded music according to structural similarity. It uses the Normalized Compression Distance (NCD) to measure the pairwise similarity between songs, represented using beat-synchronous self-similarity matrices. The approach is evaluated on its ability to cluster a collection into groups of performances of the same musical work. Tests are aimed at finding the combination of system parameters that improve clustering, and at highlighting the benefits and shortcomings of the proposed method. Results show that structural similarities can be well characterized by this approach, given consistency in beat tracking and overall song structure.
Notes:
- Normalized Compression Distance (NCD) a universal distance metric.
- Experimental setup – all classical music
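The Normalized Compression Distance at the heart of Bello's method is easy to experiment with, since it only needs an off-the-shelf compressor. A toy sketch using zlib – the byte strings here are synthetic stand-ins, not real serialized self-similarity matrices:

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance between two byte strings.

    NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)),
    where C(.) is the compressed length. Values near 0 mean the inputs
    share a lot of structure; values near 1 mean they share little.
    """
    cx = len(zlib.compress(x))
    cy = len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

a = b"ABABABAB" * 200          # two inputs with identical structure
b_ = b"ABABABAB" * 200
c = bytes(range(256)) * 8      # an input with unrelated structure
print(ncd(a, b_))              # small: shared structure compresses away
print(ncd(a, c))               # larger: little shared structure
```

In the paper the inputs are beat-synchronous self-similarity matrices, so two performances of the same work, even at different tempos, compress well together and end up close under NCD.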
A FILTER-AND-REFINE INDEXING METHOD FOR FAST SIMILARITY SEARCH IN MILLIONS OF MUSIC TRACKS
Dominik Schnitzer, Arthur Flexer, Gerhard Widmer
Paper: PDF
ABSTRACT We present a filter-and-refine method to speed up acoustic audio similarity queries which use the Kullback-Leibler divergence as similarity measure. The proposed method rescales the divergence and uses a modified FastMap [1] implementation to accelerate nearest-neighbor queries. The search for similar music pieces is accelerated by a factor of 10–30 compared to a linear scan but still offers high recall values (relative to a linear scan) of 95–99%. We show how the proposed method can be used to query several million songs for their acoustic neighbors very fast while producing almost the same results that a linear scan over the whole database would return. We present a working prototype implementation which is able to process similarity queries on a 2.5 million songs collection in about half a second on a standard CPU.
Notes: Gaussian similarity features can be expensive.
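The two-stage filter-and-refine idea can be sketched in a few lines. This toy version uses univariate Gaussians, treats the symmetrized KL divergence as a squared distance, and uses a single FastMap axis with arbitrarily chosen pivots – the paper models songs with multivariate MFCC Gaussians and rescales the divergence more carefully, so treat this purely as an illustration of the query structure:

```python
import numpy as np

rng = np.random.default_rng(0)

def skl(p, q):
    """Symmetrized KL divergence between 1-D Gaussians given as (mean, var)."""
    (m1, v1), (m2, v2) = p, q
    d2 = (m1 - m2) ** 2
    return 0.5 * ((v1 + d2) / v2 + (v2 + d2) / v1) - 1.0

def fastmap_axis(items, a, b):
    """One FastMap coordinate per item, from two pivot objects a and b."""
    dab = skl(a, b)
    return np.array([(skl(a, x) + dab - skl(b, x)) / (2.0 * np.sqrt(dab))
                     for x in items])

# Toy database: each "song" is a Gaussian timbre model (mean, variance)
db = [(rng.normal(0, 5), rng.uniform(0.5, 2.0)) for _ in range(2000)]
query = (0.0, 1.0)

# Filter: project everything onto one cheap FastMap axis, shortlist 50
a, b = db[0], db[1]                      # real FastMap picks distant pivots
coords = fastmap_axis(db, a, b)
q_coord = fastmap_axis([query], a, b)[0]
candidates = np.argsort(np.abs(coords - q_coord))[:50]

# Refine: compute the exact divergence only on the shortlist
best = min(candidates, key=lambda i: skl(query, db[i]))
print(best, skl(query, db[best]))
```

The speedup comes from the filter step: the expensive divergence is evaluated 50 times instead of 2,000, and the reported 95–99% recall measures how often the true nearest neighbors survive the filter.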
ISMIR Oral Session 5 – Tags
Oral Session 5 – Tags
Session Chair: Paul Lamere
I’m the session chair for this session, so I can’t keep notes. So instead I offer the abstracts.
TAG INTEGRATED MULTI-LABEL MUSIC STYLE CLASSIFICATION WITH HYPERGRAPH
Fei Wang, Xin Wang, Bo Shao, Tao Li and Mitsunori Ogihara
Abstract: Automatic music style classification is an important, but challenging problem in music information retrieval. It has a number of applications, such as indexing of and searching in musical databases. Traditional music style classification approaches usually assume that each piece of music has a unique style and they make use of the music contents to construct a classifier for classifying each piece into its unique style. However, in reality, a piece may match more than one, even several different styles. Also, in this modern Web 2.0 era, it is easy to get a hold of additional, indirect information (e.g., music tags) about music. This paper proposes a multi-label music style classification approach, called Hypergraph integrated Support Vector Machine (HiSVM), which can integrate both music contents and music tags for automatic music style classification. Experimental results based on a real world data set are presented to demonstrate the effectiveness of the method.
EASY AS CBA: A SIMPLE PROBABILISTIC MODEL FOR TAGGING MUSIC
Matthew D. Hoffman, David M. Blei, Perry R. Cook
ABSTRACT Many songs in large music databases are not labeled with semantic tags that could help users sort out the songs they want to listen to from those they do not. If the words that apply to a song can be predicted from audio, then those predictions can be used to automatically annotate a song with tags, allowing users to get a sense of what qualities characterize a song at a glance. Automatic tag prediction can also drive retrieval by allowing users to search for the songs most strongly characterized by a particular word. We present a probabilistic model that learns to predict the probability that a word applies to a song from audio. Our model is simple to implement, fast to train, predicts tags for new songs quickly, and achieves state-of-the-art performance on annotation and retrieval tasks.
USING ARTIST SIMILARITY TO PROPAGATE SEMANTIC INFORMATION
Joon Hee Kim, Brian Tomasik, Douglas Turnbull
ABSTRACT Tags are useful text-based labels that encode semantic information about music (instrumentation, genres, emotions, geographic origins). While there are a number of ways to collect and generate tags, there is generally a data sparsity problem in which very few songs and artists have been accurately annotated with a sufficiently large set of relevant tags. We explore the idea of tag propagation to help alleviate the data sparsity problem. Tag propagation, originally proposed by Sordo et al., involves annotating a novel artist with tags that have been frequently associated with other similar artists. In this paper, we explore four approaches for computing artist similarity based on different sources of music information (user preference data, social tags, web documents, and audio content). We compare these approaches in terms of their ability to accurately propagate three different types of tags (genres, acoustic descriptors, social tags). We find that the approach based on collaborative filtering performs best. This is somewhat surprising considering that it is the only approach that is not explicitly based on notions of semantic similarity. We also find that tag propagation based on content-based music analysis results in relatively poor performance.
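Tag propagation itself is simple to sketch: annotate a new artist with the tags of its most similar artists, weighting each vote by similarity. All the artist names, similarity scores and tags below are invented for illustration, and the weighting scheme is just one plausible choice:

```python
from collections import Counter

def propagate_tags(artist, similar, tags, k=3, top_n=5):
    """Annotate `artist` with tags voted by its k most similar artists.

    similar: dict mapping artist -> list of (other_artist, similarity)
    tags:    dict mapping artist -> list of tags already known for it
    """
    neighbors = sorted(similar[artist], key=lambda p: -p[1])[:k]
    votes = Counter()
    for other, sim in neighbors:
        for tag in tags.get(other, []):
            votes[tag] += sim            # weight each vote by similarity
    return [t for t, _ in votes.most_common(top_n)]

similar = {"new artist": [("artist a", 0.9), ("artist b", 0.7),
                          ("artist c", 0.2)]}
tags = {"artist a": ["indie", "rock"],
        "artist b": ["indie", "folk"],
        "artist c": ["metal"]}
print(propagate_tags("new artist", similar, tags))
# prints ['indie', 'rock', 'folk', 'metal']
```

The paper's contribution is the comparison of where `similar` comes from – collaborative filtering, social tags, web documents, or audio content – with collaborative filtering winning.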
MUSIC MOOD REPRESENTATIONS FROM SOCIAL TAGS
Cyril Laurier, Mohamed Sordo, Joan Serra, Perfecto Herrera
ABSTRACT This paper presents findings about mood representations. We aim to analyze how people tag music by mood, to create representations based on this data and to study the agreement between experts and a large community. For this purpose, we create a semantic mood space from last.fm tags using Latent Semantic Analysis. With an unsupervised clustering approach, we derive from this space an ideal categorical representation. We compare our community-based semantic space with expert representations from Hevner and the clusters from the MIREX Audio Mood Classification task. Using dimensional reduction with a Self-Organizing Map, we obtain a 2D representation that we compare with the dimensional model from Russell. We present as well a tree diagram of the mood tags obtained with a hierarchical clustering approach. All these results show a consistency between the community and the experts as well as some limitations of current expert models. This study demonstrates a particular relevancy of the basic emotions model with four mood clusters that can be summarized as: happy, sad, angry and tender. This outcome can help to create better ground truth and to provide more realistic mood classification algorithms. Furthermore, this method can be applied to other types of representations to build better computational models.
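The LSA step can be illustrated with a tiny, made-up tag-by-track count matrix: a truncated SVD places tags that co-occur on the same tracks close together in a low-dimensional semantic space, which is what the clustering then operates on. The tags and counts here are invented, not last.fm data:

```python
import numpy as np

# Toy tag-by-track matrix: rows are mood tags, columns are tracks,
# entries count how often each tag was applied to each track.
mood_tags = ["happy", "cheerful", "sad", "melancholic"]
X = np.array([[5, 4, 0, 0, 1],
              [4, 5, 0, 1, 0],
              [0, 0, 5, 4, 4],
              [0, 1, 4, 5, 4]], dtype=float)

# LSA: truncated SVD projects each tag into a k-dimensional space
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
tag_space = U[:, :k] * s[:k]    # row i: semantic coordinates of tag i

def cos(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# Tags used on the same tracks land close together in the space
print(cos(tag_space[0], tag_space[1]))  # happy vs cheerful: high
print(cos(tag_space[0], tag_space[2]))  # happy vs sad: low
```

Clustering rows of `tag_space` (rather than raw tag counts) is what lets the paper recover coherent mood clusters like happy / sad / angry / tender from noisy community tags.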
EVALUATION OF ALGORITHMS USING GAMES: THE CASE OF MUSIC TAGGING
Edith Law, Kris West, Michael Mandel, Mert Bay, J. Stephen Downie
Abstract Search by keyword is an extremely popular method for retrieving music. To support this, novel algorithms that automatically tag music are being developed. The conventional way to evaluate audio tagging algorithms is to compute measures of agreement between the output and the ground truth set. In this work, we introduce a new method for evaluating audio tagging algorithms on a large scale by collecting set-level judgments from players of a human computation game called TagATune. We present the design and preliminary results of an experiment comparing five algorithms using this new evaluation metric, and contrast the results with those obtained by applying several conventional agreement-based evaluation metrics.
ISMIR Poster Madness #3
- (PS3-1) Automatic Identification for Singing Style based on Sung Melodic Contour Characterized in Phase Plane
Tatsuya Kako, Yasunori Ohishi, Hirokazu Kameoka, Kunio Kashino and Kazuya Takeda
- (PS3-2) Automatic Identification of Instrument Classes in Polyphonic and Poly-Instrument Audio
Philippe Hamel, Sean Wood and Douglas Eck
Looks very interesting
- (PS3-3) Using Regression to Combine Data Sources for Semantic Music Discovery
Brian Tomasik, Joon Hee Kim, Margaret Ladlow, Malcolm Augat, Derek Tingle, Rich Wicentowski and Douglas Turnbull
- (PS3-4) Lyric Text Mining in Music Mood Classification
Xiao Hu, J. Stephen Downie and Andreas Ehmann
lyrics and mood – surprising results!
- (PS3-5) Robust and Fast Lyric Search based on Phonetic Confusion Matrix
Xin Xu, Masaki Naito, Tsuneo Kato and Hisashi Kawai
Phonetic confusion – misheard lyrics! KDDI – must see this.
- (PS3-6) Using Harmonic and Melodic Analyses to Automate the Initial Stages of Schenkerian Analysis
Phillip Kirlin
Schenkerian analysis – what is this really?
- (PS3-7) Hierarchical Sequential Memory for Music: A Cognitive Model
James Maxwell, Philippe Pasquier and Arne Eigenfeldt
Cognitive model for online learning.
- (PS3-8) Additions and Improvements in the ACE 2.0 Music Classifier
Jessica Thompson, Cory McKay, J. Ashley Burgoyne and Ichiro Fujinaga
Open source MIR in Java
- (PS3-9) A Probabilistic Topic Model for Unsupervised Learning of Musical Key-Profiles
Diane Hu and Lawrence Saul
topic model for key finding
- (PS3-10) Publishing Music Similarity Features on the Semantic Web
Dan Tidhar, György Fazekas, Sefki Kolozali and Mark Sandler
SoundBite – distributed feature collection
- (PS3-11) Genre Classification Using Bass-Related High-Level Features and Playing Styles
Jakob Abesser, Hanna Lukashevich, Christian Dittmar and Gerald Schuller
semantic features
- (PS3-12) From Multi-Labeling to Multi-Domain-Labeling: A Novel Two-Dimensional Approach to Music Genre Classification
Hanna Lukashevich, Jakob Abeßer, Christian Dittmar and Holger Großmann
Fraunhofer – autotagging
- (PS3-13) 21st Century Electronica: MIR Techniques for Classification and Performance
Dimitri Diakopoulos, Owen Vallis, Jordan Hochenbaum, Jim Murphy and Ajay Kapur
An automated Ishkur’s guide with multitouch – woot
- (PS3-14) Relationships Between Lyrics and Melody in Popular Music
Eric Nichols, Dan Morris, Sumit Basu and Chris Raphael
Text features vs. melodic features – where do the stressed syllables fall?
- (PS3-15) RhythMiXearch: Searching for Unknown Music by Mixing Known Music
Makoto P. Kato
Looks like an Echo Nest remix: AutoDJ
- (PS3-16) Musical Structure Retrieval by Aligning Self-Similarity Matrices
Benjamin Martin, Matthias Robine and Pierre Hanna
- (PS3-17) Exploring African Tone Scales
Dirk Moelants, Olmo Cornelis and Marc Leman
No standardized scales – how do you deal with that?
- (PS3-18) A Discrete Filter Bank Approach to Audio to Score Matching for Polyphonic Music
Nicola Montecchio and Nicola Orio
- (PS3-19) Accelerating Non-Negative Matrix Factorization for Audio Source Separation on Multi-Core and Many-Core Architectures
Eric Battenberg and David Wessel
Runs NMF on GPUs and OpenMP
- (PS3-20) Musical Models for Melody Alignment
Peter van Kranenburg, Anja Volk, Frans Wiering and Remco C. Veltkamp
alignment of folk songs
- (PS3-21) Heterogeneous Embedding for Subjective Artist Similarity
Brian McFee and Gert Lanckriet
Crazy-ass features!
- (PS3-22) The Intersection of Computational Analysis and Music Manuscripts: A New Model for Bach Source Studies of the 21st Century
Masahiro Niitsuma, Tsutomu Fujinami and Yo Tomita