“The seductiveness of music lies in its ability to titillate the senses”: Elaine Chew on musical structure

Think about the last time a piece of music took you by surprise… What triggered it? How did you feel? Did others react the same way? You might become aware of musical structure through an established rhythmic pattern or a subverted harmonic expectation. Structure exists at various levels within a piece of music, from short motifs through to longer patterns.

Structure is an integral component of music; indeed, music is often described as “organised sound” (Varèse, cited in Goldman, 1961). Composers conceive and organise structure, performers express it, and listeners decipher it. Differences in how we perceive these structures shape our expectations of the forthcoming music and alter our individual experiences of it.

So how can we make sense of our musical experiences by analysing and quantifying structure? Elaine Chew is a self-described “mathemusical scientist” (Chew, 2016, p. 37). Her research on musical structure spans conceptual art through to mathematical modelling and gives new insights into music perception. She spoke to Goldsmiths’ Music, Mind, and Brain MSc students about the perception and apperception of musical structure.


“When practice becomes performance”
(Chew & Child, 2014)
Sight-reading as a means of structural insight

The process of sight-reading requires an array of neurological and motor functions, including “perception (de-coding note patterns), kinesthetics (executing motor programs), memory (recognising patterns) and problem-solving skills (improvising and guessing)” (Parncutt & McPherson, 2002, p. 78). This reliance on pattern decoding means that sight-reading could provide insight into a performer’s initial comprehension of musical structure.

[Figure. Source: Chew, 2013]

Prior to the nineteenth century, public performances of music usually consisted of scores being performed at first sight, without practice (Parncutt & McPherson, 2002). Nowadays, music is usually painstakingly rehearsed beforehand. In 2013, Elaine Chew worked with composer Peter Child and conceptual artist Lina Viste Grønli to challenge our expectations of performance. After a visit to the Berlin Philharmonic, Viste Grønli found her thoughts fixated on the musicians’ warming-up “performance” and began questioning how these chaotic, unplanned sounds could be captured. What followed was Practising Haydn. Chew was recorded practising Haydn’s Piano Sonata in E Flat, and the session was then meticulously transcribed to create a new score, which was performed in public.

[Figure. Source: Chew, 2013]

Comparing the transcribed practice session with Haydn’s original score leaves a fascinating trace of the cognitive processing of musical structure. Child’s score is full of metrical changes, repetitions, pauses, and interruptions, quite unlike anything you’d expect in a piece of Haydn’s music. These alterations mark structural points at which musical patterns and expectations are subverted. The project is one example of a conceptual analysis of structure through the composer/performer relationship.


“How music works, why music works and […] how to make music work”
(Chew, 2016, p. 38)
Modelling musical structure and expectancy

In order to better understand the processes that lead composers and listeners to create and perceive musical boundaries, it is important to develop mathematical models of music cognition that describe variations in musical expectancy and tension (Huron, 2006). Tension can be induced by both tonal and temporal patterns. Moreover, the musical properties that make up such patterns are multidimensional and dynamic (Herremans & Chew, 2016), which makes them difficult to model accurately.

Chew argues that important parallels can be drawn between our understanding of the physical world and our experience of musical structure (Chew, 2016). As she explains, people can imagine and describe forms of physical movement with ease: What does it feel like to march in a muddy swamp? How vividly can you remember your first time accelerating down a ski slope or on a rollercoaster? Can you picture the swooping sensation of a falcon changing its course in flight? For most people, these thought experiments are intuitive. Composers can, therefore, draw from our common knowledge of the physical world to design equally vivid musical gestures.

More importantly, concepts from physics can provide a reliable framework for describing musical structure. In the same way that physicists use mathematics to model physical phenomena, mathemusical scientists can describe the musical world in mathematical terms, borrowing modelling techniques from physics to build more accurate models of music perception: Chew uses the concept of gravity and the properties of Newtonian mechanics to model the dynamics of tonal tension and the effect of musical pulse on expectancy.

The spiral array model of tonality is a geometric representation of tonal space encompassing pitch classes, chords, and keys, in which each pitch class corresponds to spatial coordinates along a helix.

[Figure: the spiral array model. Source: Chew, 2016, p. 44]
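
To make the geometry concrete, here is a minimal Python sketch of the pitch-class helix. It assumes the common parameterisation in which each step along the line of fifths corresponds to a quarter-turn around the helix plus a fixed rise; the radius and rise values below are illustrative choices, not Chew’s calibrated parameters.

```python
import numpy as np

def pitch_position(k, r=1.0, h=0.4):
    """Position of the k-th pitch class along the line of fifths
    (..., F = -1, C = 0, G = 1, D = 2, ...): each perfect fifth is a
    quarter-turn around the helix plus a rise of h (values illustrative)."""
    return np.array([r * np.sin(k * np.pi / 2),
                     r * np.cos(k * np.pi / 2),
                     k * h])

# Four fifths complete a full turn, so E (k = 4) sits directly above C (k = 0):
for name, k in [("C", 0), ("G", 1), ("E", 4)]:
    print(name, pitch_position(k).round(3))
```

Because four perfect fifths complete a full turn of the helix, pitches a major third apart (such as C and E) end up vertically aligned, which is one way the geometry captures harmonic closeness.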

Newton’s law of gravitation allows us to localise the centre of gravity of a non-uniform object by integrating the weight of all the points in that object. Accordingly, the gravitational pull is concentrated at the centre of gravity of a given object, and the gravitational force between two objects is inversely proportional to the square of the distance between them. In mathematical terms, we have:

F = G · m₁m₂ / r²,

where F is the gravitational force between two bodies of masses m₁ and m₂, G is the gravitational constant, and r is the distance between them.

Likewise, the tonic is the centre of gravity of the given tonal context, also termed the “centre of effect”. As the tonic changes with the modulations of the harmonic structure, the centre of effect shifts accordingly. Tones that are harmonically distant from the centre of effect induce tension, and tones that are closer to it allow for resolution. However, the pull on tones far from the centre of effect behaves less like gravity and more like an elastic, spring-like force. Hence, tones moving away from and back towards the tonal centre over time create a musical narrative (see Herremans and Chew’s paper on tension ribbons for more information).
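
As a rough illustration of this idea (a simplified sketch, not Chew’s full formulation, which also places chords and keys in the spiral and uses calibrated weights), the centre of effect can be treated as a weighted average of the sounding pitches’ positions, and the tension of a tone as its distance from that centre:

```python
import numpy as np

def pitch_position(k, r=1.0, h=0.4):
    # Same helix as in the earlier sketch (illustrative parameter values).
    return np.array([r * np.sin(k * np.pi / 2), r * np.cos(k * np.pi / 2), k * h])

def centre_of_effect(pitches, weights):
    """Weighted average ("centre of gravity") of the sounding pitches,
    with weights standing in for duration or salience."""
    pts = np.array([pitch_position(k) for k in pitches])
    w = np.asarray(weights, dtype=float)
    return (w / w.sum()) @ pts

def tension(pitch, ce):
    """Distance from the centre of effect: further away reads as more tense."""
    return float(np.linalg.norm(pitch_position(pitch) - ce))

# C major context: C (0), G (1) and E (4) on the line of fifths, equal weights.
ce = centre_of_effect([0, 1, 4], [1, 1, 1])
print("tension of F# (k = 6):", round(tension(6, ce), 3))  # distant -> tense
print("tension of G  (k = 1):", round(tension(1, ce), 3))  # close   -> restful
```

Running the sketch, F sharp sits far from the C major centre of effect and reads as tense, while G sits close and reads as restful.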

Tonality is not the only parameter that shapes our experience of musical tension. Through her Expression Synthesis Project (Chew et al., 2005), Chew also illustrates how timing and beat can affect expectancy, and how this can be modelled according to Newtonian mechanics. The pull is dictated by the musical pulse, and Newton’s three laws of motion are used to operationalise timing, where the time elapsed between two beats is analogous to the distance between two points in space (Chew, 2016).
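
A toy analogy in the same spirit (an assumed illustration, not the ESP implementation itself): if beats are equally spaced points along a road and tempo is the car’s velocity, then a constant deceleration produces the gradually widening inter-beat intervals of a ritardando.

```python
import numpy as np

beat_spacing = 1.0   # "distance" between consecutive beats along the road
v = 2.0              # initial tempo: 2 beats per second (120 bpm)
a = -0.15            # constant deceleration, i.e. a gradual ritardando

t, onsets = 0.0, [0.0]
for _ in range(8):
    # Time dt to cover one beat_spacing under constant acceleration:
    # beat_spacing = v*dt + 0.5*a*dt**2, taking the smaller positive root.
    dt = (-v + np.sqrt(v**2 + 2 * a * beat_spacing)) / a
    t += dt
    v += a * dt
    onsets.append(round(t, 3))

print("beat onsets:", onsets)
print("inter-beat intervals:", [round(y - x, 3) for x, y in zip(onsets, onsets[1:])])
```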

Chew’s work on the mathematical modelling of musical structure allowed her and her colleague Alexandre François to develop the MuSA.RT software, which can analyse the tonal structure of a given musical piece and provide a corresponding graphical representation using the spiral array model. In this video, Chew demonstrates how the software responds to musical information:

Another exciting application of Chew’s work is its potential for artificial music composition. MorpheuS is an automatic music generation system, developed by Chew and her colleague Dorien Herremans, which uses machine-learning techniques and pattern-detection algorithms in conjunction with Chew’s tonal tension model to produce novel music in a specific style or in a combination of several styles. For instance, below is a recording of three pieces morphed from A Little Notebook for Anna Magdalena by J. S. Bach and three pieces morphed from 30 and 24 Pieces for Children by Kabalevsky.


“Listening as a creative act”
(Smith, Schankler & Chew, 2014)
Individual differences in structure perception

Whilst music perception and cognition can be tracked to a certain extent, Chew emphasises our individual differences. Musical structure arises from parameters on different layers, which gives the listener ample space for interpretation. Attention seems to play a crucial role in shaping perception and untangling ambiguities, and is in turn influenced by personal listening history and expectations (Smith et al., 2014a). As music listening and music making require the integration of diverse musical parameters (Herremans & Chew, 2016), researchers’ predictions of personal experience are limited by divergence in which musical features each listener deems relevant.

Perception of musical boundaries, for example, is predictable from novelty peaks, which capture the extent to which different musical features change over time (Smith et al., 2014b); a simplified sketch of such a measure follows this paragraph. Timbre, harmony, key, rhythm, or tempo might be decisive. And again, listeners’ boundary and novelty annotations reveal individual differences across those musical parameters. Not every novelty peak, therefore, makes us perceive a structural boundary, because personal attention filters which physical events we register. Some theories of structure perception, such as Lerdahl and Jackendoff’s (1983) A Generative Theory of Tonal Music, ascribe gestalt rules to the process, but research suggests that our perceptions vary from person to person (Smith et al., 2014a). When repeatedly exposed to the same musical piece, people even disagree with themselves about structure (Margulis, 2012)!
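
The sketch below is a deliberately simplified stand-in for the audio novelty measures analysed in Smith et al. (2014b): it tracks frame-to-frame change in an arbitrary feature sequence and treats prominent local maxima as candidate boundaries.

```python
import numpy as np

def novelty_curve(features):
    """Frame-to-frame change in a (frames x dims) feature sequence:
    larger values mean the music is changing more at that point."""
    return np.linalg.norm(np.diff(features, axis=0), axis=1)

def candidate_boundaries(curve, threshold):
    """Local maxima above a threshold, taken as candidate section boundaries."""
    return [i + 1 for i in range(1, len(curve) - 1)
            if curve[i] > curve[i - 1] and curve[i] > curve[i + 1]
            and curve[i] > threshold]

# Toy example: 20 frames of a 2-D feature that flips at frame 10,
# mimicking a sudden change of key or timbre.
rng = np.random.default_rng(0)
feats = np.vstack([np.tile([0.0, 1.0], (10, 1)), np.tile([1.0, 0.0], (10, 1))])
feats += 0.05 * rng.standard_normal(feats.shape)
print("candidate boundaries at frames:", candidate_boundaries(novelty_curve(feats), 0.5))
```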

Structure is often ambiguous, particularly in improvised music, so it’s important to remember that our perception is flexible. In Practising Haydn, the transcription process was open to interpretation: how and why did the composer decide which changes warranted formal transcription? This ambiguity of structural boundaries is likely due to the multidimensional complexity of musical patterns and the aggregate nature of the perceptual process.

These projects emphasise the creative nature of listening, the breadth of Chew’s work, and the important role that structure plays in our understanding of music perception and cognition in general. Next time you’re listening to that exciting piece of music, take a minute to remember how complex and unique your experience may be.

Lena Esther Ptasczynski, Fran Board, and Paul Bejjani

References
Chew, E., François, A., Liu, J., & Yang, A. (2005). ESP: A Driving Interface for Expression Synthesis. Proceedings of the 2005 International Conference on New Interfaces for Musical Expression (NIME05), Vancouver, BC, Canada, 224-227.
Chew, E. (2013). About practising Haydn. Retrieved January 31, 2018, from http://elainechew-piano.blogspot.co.uk/2013/09/about-practicing-haydn.html
Chew, E., & Child, P. (2014). Multiple Sense Making: When Practice Becomes Performance. Cambridge, UK: Cambridge University.
Chew, E. (2016). Motion and gravitation in the musical spheres. In J. Smith, E. Chew, & G. Assayag (Eds.), Mathemusical Conversations: Mathematics and Computation in Music Performance and Composition. Singapore: World Scientific Publishing Company.
Goldman, R. F. (1961). Varèse: Ionisation; Density 21.5; Intégrales; Octandre; Hyperprism; Poème Electronique. Instrumentalists, cond. Robert Craft. Columbia MS 6146 (stereo) [Record review]. Musical Quarterly, 47, 133–134.
Herremans, D., & Chew, E. (2016). Tension ribbons: Quantifying and visualising tonal tension. Second International Conference on Technologies for Music Notation and Representation, 8–18.
Huron, D. (2006). Sweet Anticipation: Music and the Psychology of Expectation. MIT Press.
Lerdahl, F., & Jackendoff, R. (1983). A Generative Theory of Tonal Music. Cambridge, MA: MIT Press.
Margulis, E. H. (2012). Musical Repetition Detection Across Multiple Exposures. Music Perception: An Interdisciplinary Journal, 29(4), 377–385. https://doi.org/10.1525/mp.2012.29.4.377
Parncutt, R., & McPherson, G. E. (2002). The Science and Psychology of Music Performance: Creative Strategies for Teaching and Learning. Research Studies in Music Education, 19(1), 78–78. https://doi.org/10.1177/1321103X020190010803
Smith, J., Schankler, I., & Chew, E. (2014a). Listening as a Creative Act: Meaningful Differences in Structural Annotations of Improvised Performances. Music Theory Online, 20(3). http://www.mtosmt.org/issues/mto.14.20.3/mto.14.20.3.smith_schankler_chew.html
Smith, J., Chuan, C.-H., & Chew, E. (2014b). Audio properties of perceived boundaries in music. IEEE Transactions on Multimedia, Special Issue on Music Data Mining, 16(5), 1219–1228.