As the relationship between music and technology grows stronger, researchers are finding new and innovative ways of interacting with music. Matthias Mauch, a researcher in music informatics at Queen Mary University of London, has explored many of these new approaches in depth.
As daunting as it sounds, music informatics simply involves research in areas such as the automatic transcription of music, chords, and chord progressions; key detection; and music classification. Since completing his PhD on audio chord transcription (Mauch, 2010), Mauch has been involved in projects like Songle (a lyrics-to-audio alignment program, http://songle.jp/), Driver’s Seat (an application with Last.fm that allows users to search for songs based on a variety of different musical factors, currently not publicly available) and DarwinTunes (an interactive program that uses the concept of evolution in music, http://darwintunes.org/).
While at the National Institute of Advanced Industrial Science and Technology in Japan, Mauch helped create Song Prompter, a program that aligns chords and lyrics to audio using only the audio recording and a text file containing the lyrics and chords (Mauch, Fujihara, & Goto, 2010, 2011). From this input, the program generates a visual display of the song's chords and lyrics. The program was later adapted into a web interface called Songle, which allows users to listen to songs and correct the automatic output when, for example, a chord change is misplaced (Goto, Yoshii, & Fujihara, 2011).
His next major project, Driver’s Seat, was a Spotify application developed with Last.fm. Driver’s Seat enables users to search for music by musical factors, as opposed to just by genre or similarity. For example, listeners can set the application to search for music with a specific tempo, loudness, energy, percussiveness, ‘danceability’, or various other musical parameters. The application then provides a playlist that meets the requirements set out by the user. Driver’s Seat also includes presets, letting users listen to music with specific characteristics (e.g. music in A minor or music with complex rhythms).
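The kind of parameter-range search described above can be sketched as a simple filter over per-track features. This is only an illustration of the idea, not Last.fm's actual code; the track data and feature names here are hypothetical:

```python
def build_playlist(tracks, **ranges):
    """Return the tracks whose features fall inside every requested (low, high) range.

    tracks: list of dicts mapping feature names to values
    ranges: keyword arguments like tempo=(110, 130), danceability=(0.7, 1.0)
    """
    def matches(track):
        return all(low <= track[feature] <= high
                   for feature, (low, high) in ranges.items())
    return [track for track in tracks if matches(track)]


# Hypothetical track data for illustration.
tracks = [
    {"title": "Track A", "tempo": 122, "danceability": 0.85},
    {"title": "Track B", "tempo": 68, "danceability": 0.30},
    {"title": "Track C", "tempo": 118, "danceability": 0.40},
]

playlist = build_playlist(tracks, tempo=(110, 130), danceability=(0.7, 1.0))
```

A preset (such as "complex rhythms") would then simply be a stored set of such ranges applied in one call.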
Mauch was also involved in the development of DarwinTunes, a model of evolution using music (MacCallum, Mauch, Burt, & Leroi, 2012). Starting from loops of sinusoidal wave forms, new music is created through genetic algorithms driven by listener selection. On the project's web interface, users rate the wave-form loops on a 1-5 scale. The most popular loops then “have sex” with each other, producing “baby-loops”, while the least popular loops are filtered out; together with low-level mutation, this constitutes a musical model of sexual reproduction. After only 150 generations, a steady rhythm could be identified. After 500 generations, the sound became more pleasant, with simple major-chord harmonies. After about one thousand generations, very simple melodies began to emerge as the sound textures grew more complex. Although 2000 generations did not bring much change in musicality, new sounds and short melodies appeared. By 3000 generations, the loops had developed into complex, intertwining melodies with rhythmic accompaniment. As a musical model of evolution, DarwinTunes has provided us with a scientific application of music informatics research: it demonstrates the evolution of culture, based solely on listener selection, in the absence of deliberate human creativity.
Now at Queen Mary, Mauch has focused on the phenomenon of intonation drift, or more specifically, the accuracy of pitch in singing. How and why do people sing out of tune, or even in tune? How do solo singers drift in intonation over the course of a piece? Mauch’s latest research project measured the rate and extent to which people’s pitch shifted while singing “Happy Birthday” (without words) three times. So far, the research has shown that most people drift very little in overall pitch level: most stay within half a semitone, up or down, over all 75 notes of the three repetitions. Interestingly, however, Mauch and his collaborators found that the pitch error per note is quite large, especially in comparison to the overall intonation drift. This is surprising because intuition suggests that large per-note errors should accumulate and shift the overall intonation as well. Instead, it appears that singers are somehow compensating for the errors they make on individual pitch intervals. Mauch and his colleagues are still working on models that will explain these results.
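The intuition, and why compensation breaks it, can be made concrete with a toy simulation. This is only an illustration of the statistical point, not Mauch's analysis: if each sung interval carries an independent error, errors accumulate as a random walk; if each note instead errs around the singer's intended pitch (compensation), the final offset stays small no matter how many notes are sung. The error magnitudes below are arbitrary:

```python
import random

def drift_uncompensated(n_notes, interval_sd):
    """Each interval errs relative to the PREVIOUS note, so errors accumulate."""
    pitch = 0.0
    for _ in range(n_notes):
        pitch += random.gauss(0, interval_sd)
    return pitch  # final offset from the intended pitch level, in semitones

def drift_compensated(n_notes, note_sd):
    """Each note errs around the intended TARGET pitch, so errors do not accumulate."""
    pitch = 0.0
    for _ in range(n_notes):
        pitch = random.gauss(0, note_sd)
    return pitch

def mean_abs_drift(drift_fn, n_notes=75, sd=0.5, trials=200):
    """Average absolute final offset over many simulated performances."""
    return sum(abs(drift_fn(n_notes, sd)) for _ in range(trials)) / trials
```

With 75 notes and per-note errors of half a semitone, the uncompensated walk typically ends several semitones off target, while the compensated singer stays within a fraction of a semitone, which is closer to what Mauch's data show.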
Mauch’s presentation showed us some of the research currently being carried out in music informatics. He demonstrated how music can offer a unique approach to investigating areas not typically considered musical, such as evolution. His recent singing research brought us hope, suggesting that we may sing better than we think. However, we still do not know why this disconnect between our actual singing and our judgment of it occurs, or how it relates to specific cognitive processes. As Mauch suggests, perhaps through collaboration among musicians, psychologists, and ethnomusicologists, we will soon be able to answer these questions while discovering even more practical applications of music informatics.
For more information on Mauch’s work visit: http://matthiasmauch.net/
Anita Paas and Angúl Castro
Goto, M., Yoshii, K., & Fujihara, H. (2011). Songle: a web service for active music listening improved by user contributions. Proceedings of the 12th International Conference on Music Information Retrieval, 311-316.
Mauch, M. (2010). Automatic chord transcription using computational models of music context. Unpublished doctoral dissertation, Queen Mary, University of London.
Mauch, M., Fujihara, H., & Goto, M. (2010). Song prompter: an accompaniment system based on the automatic alignment of lyrics and chords to audio.
Mauch, M., Fujihara, H., & Goto, M. (2011). Song prompter: an interactive performance assistant with scrolling lyrics and chord display, In press.
MacCallum, R. M., Mauch, M., Burt, A., & Leroi, A. M. (2012). Evolution of music by public choice. Proceedings of the National Academy of Sciences, 109(30), 12081-12086.