Garden Design, Birdsong and Creativity

The psychology of aesthetics and creativity finds footing in numerous disparate realms. However, researchers like Paul Sowden at the University of Winchester have found common ground in the realm of nature. His interests lie in the cognitive and affective mechanisms of creative and restorative thinking, which he studies in the context of natural environments. Creativity in garden design, beauty and serenity in natural settings, and the role of birdsong in producing such experiences all converge in the investigation of the experience of nature.

The Creative Process in the Context of Garden Design


 (Garden Design: https://harewood.org/explore/capability-brown-300-festival-2016/, 2018)

Contemporary art challenges boundaries. Researchers have shifted their attention from mainstream manifestations of traditional artistic practices, such as music and visual arts, and have conducted studies on human behaviour in relation to less well-known art forms. With his students and colleagues, Paul Sowden exemplifies this alternative viewpoint by studying the creative processes employed during garden design (Pringle & Sowden, 2017), following a personal attachment to gardens and nature. He sees garden design as an overlooked art form and a rich and complex source of creative inspiration.

Sowden is interested in creative thinking as a dual process comprising an associative mode and an analytical mode. The associative mode involves memory retrieval, generating ideas and concepts, and insight (‘aha’) experiences. In contrast, the analytical mode involves logical deduction, evaluation of remembered experiences and past behaviour, and evaluation of design ideas and concepts. Sowden studies these two modes of the creative process using a sample comprising professional garden designers, garden design students, professional fine artists, and, as a non-artist control group, members of non-academic staff with lower creative achievement scores on the Creative Achievement Questionnaire (CAQ)* (Carson, Peterson, & Higgins, 2005).


Figure 1. Means of transition probabilities for each modal transition (Pringle & Sowden, 2017)

In a first experiment (Pringle & Sowden, 2017), participants were asked to design a garden with the theme ‘journey’ and were instructed to express their thoughts verbally. Pringle and Sowden developed a coding scheme to deconstruct participants’ verbal reports and found no difference between the groups in how they transitioned between thinking modes (see Fig. 1). However, when comparing transitions between affective and cognitive thought within these thinking modes, the findings revealed that the professional garden designers switched between analytic affective and associative cognitive states more than the non-artist control group (see Fig. 2).


Figure 2. Means of transition probabilities for each modal transition including affective and cognitive states (Pringle & Sowden, 2017).
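For readers curious how transition probabilities like those in Figures 1 and 2 can be derived from a coded verbal report, here is a minimal Python sketch. It assumes a hypothetical coding in which each segment of a participant’s report is labelled with one of four mode-state combinations (associative/analytic crossed with cognitive/affective); the labels and data are illustrative and are not Pringle and Sowden’s actual coding scheme.

```python
from collections import Counter, defaultdict

# Hypothetical coded verbal report: one label per text segment, in order.
coded_report = [
    "assoc_cog", "assoc_cog", "analytic_cog", "analytic_aff",
    "assoc_cog", "analytic_cog", "assoc_aff", "assoc_cog",
]

def transition_probabilities(sequence):
    """Estimate P(next state | current state) from one coded sequence."""
    counts = defaultdict(Counter)
    for current, nxt in zip(sequence, sequence[1:]):
        counts[current][nxt] += 1
    probs = {}
    for current, nxt_counts in counts.items():
        total = sum(nxt_counts.values())
        probs[current] = {nxt: n / total for nxt, n in nxt_counts.items()}
    return probs

for state, dist in transition_probabilities(coded_report).items():
    print(state, dist)
```

In a real analysis these probabilities would be computed per participant and then averaged within each group, as in the figures above.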

Sowden also explores the impact of meshed thinking in the context of flexibility (shifting between designs and/or producing more designs). Building on dual process theory as applied in psychology, according to which thought (in this case, creative thinking) is the result of two different cognitive processes (analytic and associative), Sowden uses a coding scheme based on two models of meshed thinking: meshing occurs when (1) both associative cognitive and analytic cognitive modes can be clearly identified in a text segment of a participant’s verbal report, or when (2) there is clear textual evidence of both associative cognitive and analytic affective modes. Sowden’s research shows that, overall, professional garden designers carry out more meshed thinking, with meshed associative cognitive and analytic cognitive thought occurring more often than meshed associative cognitive and analytic affective thought. Interestingly, his research also shows that meshed analytic affective-associative cognitive thought correlates with design quality and creativity. Shifting from analytic affective to associative cognitive modes of thought is related to an increased probability of initiating a transition between designs, and the more designers shift between designs, the more creative and higher in quality the final design tends to be, with affect playing an important part in the shifting process. Including affect in the explanatory model for creative thinking in garden design therefore increases its explanatory power.

The importance of meshed analytic affective-associative cognitive thought is also backed by neuroimaging research (Beaty et al., 2014). There is evidence that the medial orbitofrontal cortex (mOFC) is implicated in affective and associative processing, and that its activity is linked not only to each of these processes separately but also to their combination. Positive affective states are linked with disinhibited association (i.e. greater readiness to form or activate associations between stimuli), so that objects valued affectively as positive are more likely to facilitate associative activation and better mood (Shenhav, Barrett and Bar, 2014; Bar, 2009). If the mOFC plays a role in representing specific states and stimulus contexts associated with an experience one has learned to feel and recognise as rewarding, it may integrate value representations and their contingent contexts, thereby strengthening the association (Shenhav, Barrett and Bar, 2014). Sowden’s analysis spells out one of the possible behavioural effects of our ability to affectively home in on an object in the material environment and trigger highly rewarding associative processes. Seeking that mood reward, we learn to shape the material environment into a form which reflects our affectively invested associations between self and evocative objects.

 

Natural Environments: Cognitive and Affective Restoration

Although we may not immediately recognise them, external environments can have strong psychological effects on even the most resilient minds. The bustle of a city street, the tranquillity of a forest stream, or the excitement of a thunderstorm influence what we feel and think. Sunsets over rolling hills and dew-covered ferns make their way onto computer desktops and smartphone backgrounds, and noise-cancelling headphones block out the auditory chaos of urban settings. Where does this ubiquitous sense of calm and restoration in association with natural environments (Berman, Jonides, & Kaplan, 2008; Valtchanov, Barton, & Ellard, 2010; Kaplan & Talbot, 1983) come from?


(Photo: https://www.drkarafitzgerald.com/2016/07/31/whats-natural-restorative-ahhhhhh-environment/, 2019)

Researchers like Paul Sowden explain this restorative potential in terms of attention. The psychology of attention and its absence dates back to William James’s (1892) “voluntary attention”: an attentional mechanism that requires effort, can be voluntarily controlled and involves inhibition. From this concept, Stephen Kaplan derived the notion of “directed attention” (Kaplan, 1995), which underpins Attention Restoration Theory (ART), a model of an environment’s power to restore one’s focus and concentration. ART consists of four dimensions: being away (geographical and psychological distance from causes of stress), coherence (connection to one’s environment as a whole), soft fascination (effortless attention) and compatibility (reconciliation between a person’s desires and the opportunities afforded by their environment) (Kaplan & Talbot, 1983). Because directed attention is both conscious and purposeful, sustaining it causes fatigue; the experience of natural environments ‘restores’ our capacity for directed attention by redirecting ‘involuntary’ attention to the experience of nature.

Numerous studies support the restorative potential of natural settings, from horticultural activity to garden design to river-rafting (Chen, Tu, & Ho, 2013; Garg, Couture, Ogryzlo, & Schinke, 2010; Pringle & Sowden, 2017), but the way this effect is appraised tends to differ across studies. Hartig and colleagues (2003) used blood pressure to measure stress reduction after walks in urban and natural settings, while Ratcliffe and colleagues (2013) used cognitive and affective appraisals consistent with previous models of perceived restorative potential (PRP) (Berman et al., 2008). Both approaches conclude that cognitive distraction, directed attention, and novelty (‘being away’) are associated with restoration, as are positive valence and low to moderate arousal.

Bird Sounds and Cognitive Restoration

It appears that natural environments have the power to restore one’s focus and concentration by transporting us, fascinating us, and distracting us from habitual cognitive rumination and stress. But what specific aspect(s) of natural settings are responsible for nature’s restorative potential? There may be many answers to this question (e.g. the aesthetics of a panorama, being outdoors, engaging in an activity such as walking, listening to nature sounds such as the wind or birds, etc.). Visual contributions to an environment’s soothing potential have long been studied. However, the literature lacks insight into auditory contributions to the restorative potential of a setting, a gap that Sowden, his students and colleagues seek to bridge.

Ratcliffe, Gatersleben and Sowden (2016) analysed reports of natural sounds which were deemed restorative by participants. They found that 35% of the 186 references to natural sounds included birdsong, followed closely by water (24%) and non-avian animals (18%). Amongst natural sounds, birdsongs are most often reported as having restorative power.

However, not all birds are particularly relaxing to listen to. For example, a raven’s call is often perceived as threatening (sometimes described as screaming rather than singing), while a robin’s song is almost universally perceived as pleasant. Sowden and colleagues (2016) investigated which characteristics give a birdsong its restorative potential. They presented participants (N = 174) with a stressful scenario and exposed them to bird sounds, before asking them to complete a PRP report and to state any associated memories. The highest PRPs were achieved through associations with green spaces, positive animal behaviour (e.g. raising young), certain times and seasons (e.g. morning, springtime) or active behaviour (e.g. walking). The restorative power of birdsong was shown to depend mainly on cognitive associations with feelings of soft fascination (R² = .74) and of being away (R² = .70) rather than on affective associations (arousal). These findings support Kaplan’s Attention Restoration Theory (1995) and the idea that birdsongs restore our capacity for directed attention by redirecting ‘involuntary’ attention to the auditory experience of nature.
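As a rough illustration of the kind of analysis behind such R² values, the sketch below regresses hypothetical perceived restorative potential (PRP) scores on ratings of association strength with soft fascination and reports the variance explained. The data and variable names are invented for illustration and are not from Ratcliffe, Gatersleben and Sowden’s study.

```python
import numpy as np

# Hypothetical data: one value per birdsong stimulus.
fascination = np.array([2.1, 3.4, 4.0, 1.5, 3.8, 2.9, 4.5, 1.2])  # association ratings
prp         = np.array([2.5, 3.6, 4.2, 1.8, 4.0, 3.1, 4.6, 1.5])  # perceived restorative potential

# Ordinary least squares with an intercept term.
X = np.column_stack([np.ones_like(fascination), fascination])
beta, *_ = np.linalg.lstsq(X, prp, rcond=None)

predicted = X @ beta
ss_res = np.sum((prp - predicted) ** 2)
ss_tot = np.sum((prp - prp.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot
print(f"R^2 = {r_squared:.2f}")
```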

Paul’s research has elucidated a definitive yet nuanced connection between nature and psychology, giving us an important intellectual portal between inner and outer worlds in both goal-oriented activities such as garden design and passive experiences such as restoration and stress reduction. In the same way that natural environments and birdsongs aid restoration by evoking wonder and transcendence and by providing a mental space in which to withdraw from focused attention, the process of interacting with nature in garden design invokes specific cognitive states and processes. It may come as no surprise that sunsets over ocean scenes make their way onto laptop screens (not to mention gurgling brooks inside our alarm clocks) as we learn more about how restorative experience and analytic affective-associative cognitive thought processes influence our minds and moods.

Published by Dwaynica Greaves, Lucas Klein, Tudor Balinisteanu and Agathe Fauchille

*The CAQ is the Creative Achievement Questionnaire, developed by Carson et al. (2005) as a measure of creative achievement in 10 different domains. Creativity is considered proportional to score.

 


Navigating an ambiguous world

In our daily lives, we are constantly exposed to sounds, images and speech, many of which are ambiguous. Drawing on previous experience, our brain interprets such input without our even consciously thinking about it. Sound, likewise, is a product of our environment: the overlapping waves arriving at our ears from various sources form a single mixture, and one task of our auditory system is to decode it, either by integrating the components into one stream or by segregating them, much as we parse musical harmony.

As Ernst Kris (1952, p. 259) describes, ambiguity frequently leads to an aesthetic response. Such a response shapes the re-creation of objects at shifting psychic levels or distances.

Dr Alex Billig re-introduces auditory ambiguity as an object of scientific inquiry. In his view, music is potentially valuable as a controllable stimulus for examining what is really happening in perception when studying the mind and brain.

In their 2018 study, Billig and colleagues focused on what determines whether we perceive one sound or two, by examining neural activity while listeners were exposed to the same stimulus. Once the neural signatures of each percept are known, we can ask to what extent people can influence how they perceive what they hear. Using both subjective and objective measures, Billig demonstrated that participants were indeed able to exert a significant level of control over hearing the tones as integrated or segregated.

Researching how intention and experience can act on and shape auditory perception

During his talk, Billig elaborates on the knowledge gained from researching what happens when people manipulate their auditory perception. He mentions the ability of a musical conductor to hone in on different aspects of their orchestra as an example of how people report being able to change their perception at will in order to listen to what they want to hear.

He brings up his experimental research (Billig et al., 2018), in which he recruited 23 participants to report on their own perceptions whilst listening to a musical sequence with two tonal streams. He separated them into three listening conditions: a neutral condition, a condition where they were asked to attempt integrated listening, and a condition where they were asked to attempt segregated listening. He concluded that people can influence their auditory perception. In addition, by adjusting the separation between the low and high tones in the sequence, he found that increasing the frequency separation makes participants more able to hear the two streams separately.


Figure 1. A graph demonstrating the percentage of the time participants reported auditory segregation in the three conditions and tone frequencies from Billig et al.’s (2018) study.
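A minimal sketch of how such ‘percentage of time segregated’ figures can be computed from continuous report data is given below. The sampling rate, condition names and report traces are assumptions for illustration, not Billig et al.’s actual analysis pipeline.

```python
import numpy as np

# Hypothetical report traces: 1 = 'two streams' reported, 0 = 'one stream',
# sampled at regular intervals over a short trial, one trace per condition.
reports = {
    "neutral":   np.array([0, 0, 0, 1, 1, 1, 1, 0, 1, 1]),
    "integrate": np.array([0, 0, 0, 0, 1, 0, 0, 0, 0, 1]),
    "segregate": np.array([1, 1, 0, 1, 1, 1, 1, 1, 1, 1]),
}

for condition, trace in reports.items():
    percent_segregated = 100 * trace.mean()  # fraction of samples reported as two streams
    print(f"{condition}: {percent_segregated:.0f}% of time reported as segregated")
```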

However, Billig also mentioned that there could be a bias in the reporting of these perceptions. Participants could report that they perceive different tones when they don’t, in order to please the researcher, or they could give themselves credit for perception changes that would have occurred anyway (for example, looking at an optical illusion for an extended period of time, making someone susceptible to seeing the alternatives).

This is why Billig highlights the importance of research into the neural activity involved. He asks whether people can put themselves in a state where they can manipulate the neural activity that is reflecting the sounds they’re hearing and, in doing so, change their perception of those sounds.

Support for subjective claims

Billig et al.’s 2018 study combined objective measures, such as magnetoencephalography (MEG) neuroimaging, with subjective participant reports. One aspect of the MEG data allowed them to map the activity to locations in bilateral posteromedial Heschl’s gyrus. He further showed that this neural activity phase-locks with the stimuli at very early stages of processing. These neural correlates provide strong support for many of the subjective claims examined. By supplementing subjective measures with a variety of objective measures, a deeper understanding of auditory ambiguity can be obtained.

Subjective measures can also be combined with other objective behavioural measures to create a mixed methods design. In an experiment on lexical streaming, Billig and colleagues (2013) asked participants to listen to an audio stream and press a button to report whether the sounds formed one word or two separate streams. They also included a button-pressing task asking participants to identify a gap in the sound; this was the objective measure in the experiment. Interestingly, they found that both the objective and the subjective measures of detecting that gap support the same conclusion. When nonwords are presented, participants segregate more readily, but segregation is more difficult when they are presented with word stimuli. Correspondingly, gap detection is better when the participant hears the sound as one integrated unit rather than as separate streams. Here mixed methods were very effective in exploring effects such as that of language on low-level perception.


Figure 2. Mixed methods approach to investigating lexical streaming from Billig et al. (2013).

Figure A illustrates how the subjective button presses are integrated with the objective deviation in stimuli, serving as the experimental target over time. The deviation occurs in the audio track in several places but as illustrated, listeners detect the difference more readily when they are streaming. Figure B shows the participant set-up for the completion of this task.

What does auditory ambiguity tell us about the mind and brain?

Billig’s talk highlighted the importance of conducting research into auditory ambiguity, given how frequently it occurs in our daily lives. Although occurrences of auditory ambiguity may seem trivial, these events are partly responsible for shaping our perceptions, as indicated by both self-reports and neural activity observations.  

On the surface, auditory ambiguity may seem like a straightforward idea, but dig deeper and you’ll find it’s laced with all sorts of weird and wonderful mechanisms researchers like Billig have only just started to explore.

Such an inherently ambiguous concept could have been difficult to grasp; however, Billig’s use of well-known examples, such as the Laurel-Yanny phenomenon, helped to illustrate just how ubiquitous auditory ambiguity is in our lives.

While his lecture raised questions as to whether one is in control of auditory ambiguity, Billig acknowledged that our understanding is still in its infancy and that there is still much to be learned about the constraints of ambiguity. He concluded that music provides the ideal stimuli for such research, which is good news for budding music psychologists among you interested in pursuing this line of enquiry!

Time limitations and technology glitches aside, Billig presented a talk that was highly informative and delivered in such a way that made this mind-boggling topic more understandable. Moreover, many found it inspiring to hear from a former MMB student about his career trajectory and current work.


Figure 3. Just how much do you control your auditory perception? Image: Jake Fried via thisiscolossal.com

By Marcela Palejova, Melena John, Patrick Smith and Isa Jaward.

 

References

Billig, A. J., Davis, M. H., & Carlyon, R. P. (2018). Neural decoding of bistable sounds reveals an effect of intention on perceptual organization. Journal of Neuroscience (38), 3022-17.

Billig, A. J., Davis, M.H., Deeks, J.M., Jolijn, M., & Carlyon, R.P. (2013). Lexical influences on auditory streaming. Current Biology (23), 1585-1589.

Gutschalk, A., & Dykstra, A.R. (2014). Functional imaging of auditory scene analysis. Hearing Research (307), 98-110.

Kris, E. (1952). Psychoanalytic explorations in art. New York: International Universities Press, Inc.


The Subject(s) at the Centre of Aesthetic Experiences

by Giacomo Bignardi, Kirren Chana, MacKenzie Trupp, & Sasha Koushk-Jalali

Reflections on Edward Vessel’s Introduction to Visual Neuroaesthetics

On 15th November 2018, MSc Music, Mind and Brain and MSc Psychology of the Arts, Neuroaesthetics and Creativity students had the pleasure of having Edward Vessel discuss the nature of visual aesthetic experiences and their neural correlates.


Figure 1. Edward Vessel is a neuroscientist studying the neural basis of aesthetic experiences and the neurobiology of information foraging at the Max Planck Institute for Empirical Aesthetics.

Just as philosopher G. Santayana (1995) defines ‘beauty’ as the pleasure evoked by an object, and not the object itself, Vessel studies aesthetic subject matter not based on the particularities of the stimuli but rather on subjective responses to them. Contrary to some philosophers, Vessel believes aesthetics should also be studied from a scientific perspective. According to him, ‘aesthetic appreciation represents a fundamental way of interacting with the world.’

Vessel has not set out to use brain imaging to define what is beautiful or what art is, but instead stresses a distinction between external objective stimuli and internal subjective factors. Features of our visual world matter less than the perceiver’s reaction to them. This distinction between low-level (visual) and higher-level (emotionally driven, or semantic) factors presents difficulties for scientific research on aesthetics. To address this, Vessel stresses the importance of individual differences in aesthetic experiences, and rejects the use of a simple average as a tool to measure aesthetic preference. By definition, when one averages ratings of liking, one loses information about subjective preferences.

Pairwise Correlation, Mean Minus One, and the Central Role of Meaning

The pairwise cross-correlation distribution is a tool used to measure agreement: the ratings of every possible pair of observers are correlated, and the distribution of those correlations is examined.

Figure 2. Vertical axis represents the number of pairs (people!). Horizontal axis represents the pairwise correlation values. The vertical line represents the average value for different distributions. The agreement bar on the left is a visual representation of how agreement changes as a function of the distribution. Images were computed in R-studio Version 1.1.419. The images were then post edited.

If the distribution is centred around 0, people had low agreement (i.e. low pairwise correlations), while if its centre is shifted toward higher numbers, we can say that people agree more.
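As a rough sketch of how such an agreement distribution can be computed, the code below correlates the ratings of every pair of observers over a common set of images and summarises the resulting distribution. The ratings are simulated for illustration; this is not Vessel’s data or code.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)

# Simulated ratings: rows = images, columns = observers.
n_images, n_observers = 40, 10
shared_taste = rng.normal(size=(n_images, 1))                      # component all observers share
ratings = shared_taste + rng.normal(scale=1.5, size=(n_images, n_observers))

pairwise_r = [
    np.corrcoef(ratings[:, i], ratings[:, j])[0, 1]
    for i, j in combinations(range(n_observers), 2)
]

# A distribution centred near 0 means low agreement; shifted right means higher agreement.
print(f"{len(pairwise_r)} pairs, mean pairwise correlation = {np.mean(pairwise_r):.2f}")
```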

What did Vessel do with this tool? He showed that agreement on the perceived beauty of abstract compositions is lower than agreement for images of real-world scenes. That is, people tend to agree when rating the perceived beauty of images of scenery, but disagree for abstract images (Vessel & Rubin, 2010).

Figure 3. Agreement on ratings of abstract images (left) is lower than for images of real-world scenes (right). Source: Vessel & Rubin (2010).

This led Vessel to hypothesise that there is a central role of meaning (semantic) in aesthetic experiences: our experiences are generalisable by the degree to which we have shared semantic interpretations. If people interpret an object in the same way, their aesthetic reaction to it will probably be similar. That is, according to Vessel:

“Shared semantics leads to shared preferences”

Another powerful statistical tool that Vessel utilised to measure agreement, and once more the role of meaning, is Mean Minus One (MM1). This specifically measures shared and unique variance amongst participants’ ratings, and quantifies this to a value between 0 and 1. This is indicative of how much people agree between themselves, with 1 being a perfect match (100% of the variance is shared) and 0 being no agreement (0% of the variance is shared). Imagine collecting n x m ratings, where n = number of subjects and m = number of objects. This is roughly how MM1 is computed:

Figure 4. Every column represents one participant, and every row represents one object. The numbers represent the measure of interest (ratings from 1 to 7 on a Likert scale). Steps to compute MM1: 1) For object 1 (first row), compute the mean of the scores of every subject except subject 1 (first column), and repeat for all m objects (rows); 2) Compute the correlation between subject 1’s ratings and these means; 3) Iterate the process, excluding a different subject each time (subject 2, 3, … n); 4) Convert the correlation scores to z scores; 5) Compute the average of the z scores; 6) Convert the average z score back to r.
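Following the steps in the caption, here is a minimal Python sketch of MM1 computed on a simulated n × m ratings matrix (objects in rows, subjects in columns). The data and variable names are illustrative assumptions, not Vessel’s implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
n_subjects, n_objects = 12, 30
ratings = rng.integers(1, 8, size=(n_objects, n_subjects)).astype(float)  # Likert 1-7

z_scores = []
for s in range(n_subjects):
    others = np.delete(ratings, s, axis=1)       # all subjects except subject s
    mean_minus_one = others.mean(axis=1)         # mean rating per object, excluding s
    r = np.corrcoef(ratings[:, s], mean_minus_one)[0, 1]
    z_scores.append(np.arctanh(r))               # Fisher r-to-z

mm1 = np.tanh(np.mean(z_scores))                 # average the z scores, convert back to r
print(f"MM1 = {mm1:.2f}  (1 = all variance shared, 0 = no shared variance)")
```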

Neuroaesthetics: from measures to brain correlates of subjective aesthetic experiences

Vessel has stated that, broadly speaking, aesthetic experiences have higher-level (semantic) and lower-level (purely visual) aspects, with one level perhaps contributing more towards people’s overall reactions than the other. If this is the case, then are there measurable neural correlates associated with one or both of these states?

Functional Magnetic Resonance Imaging (fMRI) is a technique widely used to research neural correlates in neuroscience. It allows neuroscientists to establish correlations between activity in particular brain areas and a specified task (such as aesthetic rating). fMRI studies suggest that beauty, regardless of its modality, is processed by the medial prefrontal cortex (mPFC) (Kawabata & Zeki, 2004; Ishizu & Zeki, 2011). However, Vessel does not seem convinced. Firstly, the mPFC processes several experiences beyond beauty, such as subjective value (Kable & Glimcher, 2007) and social cognition (Amodio & Frith, 2006), suggesting it may not be specific to processing beauty. Moreover, looking specifically at perceived beauty might narrow the broader question regarding aesthetic experiences.

Thus, Vessel shifted the focus from ‘how beautiful‘ to ‘how moving‘ images were with the following instruction:

“…Respond on the basis of how much this image ‘moves’ you… what works you find powerful, pleasing, or profound”

(Vessel, Starr & Rubin, 2012, p.3)
Figure 5. The Nightmare, 1781, Henry Fuseli. Hindola Raga, c. 1790–1800, Pahari Hills, Kangra school. An Ecclesiastic, c. 1874, Mariano José Maria Bernardo Fortuny y Carbo. Constant, c. 1988, Valerie Jaudon.

Participants were instructed to rate the degree to which they were “moved” on a scale from 1 to 4 while in the fMRI scanner. Stimuli were photographs of paintings from a variety of artists, allowing individual preferences to emerge (we know this because MM1 results show low agreement across participants).

Activity in occipito-temporal regions of the brain was found to be linearly related to ratings (more ‘moved by’ = more brain activation). Additionally, a network of anterior brain regions was activated only by the images considered most ‘moving’ (rated as 4). Some of these regions are important hubs of the Default Mode Network (DMN) (Vessel, Starr & Rubin, 2012; 2013).

Figure 6. “Distinct patterns of response to artworks as a function of their ratings in a distributed network of brain regions” (Vessel, Starr, & Rubin, 2013). Image links to the paper.

Occipito-temporal regions of the brain process external stimuli, such as visual input. Anterior regions support higher cognitive functions, such as internally generated thoughts and emotions. The DMN is more active during non-task periods and is one of the brain networks associated with internally oriented cognition (click here to read more about the DMN). When we are ‘moved by’ art, the network that processes external stimuli (occipito-temporal) is engaged simultaneously with the network that processes internally oriented events (anterior/DMN). Vessel suggests that intense “aesthetic experience involve(s) the integration of sensory and emotional reactions in a manner linked with … personal relevance” (Vessel et al., 2012, p.1). Thus, it seems that ‘being moved’, which is an access point to aesthetic experiences, might be categorically different from other experiences.

For Vessel, this is only the beginning: studying the aesthetic experiences of the subject, rather than the subject matter, gives a central role to meaning. Aesthetic experiences, beyond beauty alone, seem to constitute moments in which the external world is meaningfully integrated with the internal one, leaving the individual moved.

Figure 7. Aesthetic Experiences = Boom. It seems that after a certain threshold is reached, aesthetic magic happens. Image: fuse* – “Multiverse”. Courtesy of FUSE.

References

An Ecclesiastic, c. 1874. Mariano José Maria Bernardo Fortuny y Carbo. Retrieved from https://art.thewalters.org/detail/37641/an-ecclesiastic/

Amodio, D. M., & Frith, C. D. (2006). Meeting of minds: the medial frontal cortex and social cognition. Nature reviews neuroscience, 7(4), 268.

Hindola Raga, c. 1790–1800. Pahari Hills, Kangra school. Retrieved from https://www.pinterest.it/clemuseumart/

Ishizu, T., & Zeki, S. (2011). Toward a brain-based theory of beauty. PloS one, 6(7), e21852.

Kable, J. W., & Glimcher, P. W. (2007). The neural correlates of subjective value during intertemporal choice. Nature neuroscience, 10(12), 1625.

Kawabata, H., & Zeki, S. (2004). Neural correlates of beauty. Journal of neurophysiology, 91(4), 1699-1705.

Constant, c. 1988. Valerie Jaudon. Retrieved from https://www.google.co.uk/urlsa=i&source=images&cd=&ved=2ahUKEwiz8v_RoYnfAhVIlxoKHQaVBzkQjhx6BAgBEAM&url=https%3A%2F%2Fwww.pinterest.com%2Fpin%2F455848793507662652%2F&psig=AOvVaw1Okgus2wSv1zdWcuXvlyfr&ust=1544117750947388

Multiverse. (2018). [Gif]. Retrieved from https://www.fuseworks.it/en/works/multiverse/

Santayana, G. (1995). The sense of beauty: Being the Outlines of an Aesthetic Theory. New York: Dover (Original work published 1896).

The Nightmare, 1781. Henry Fuseli. Retrieved from https://en.wikipedia.org/wiki/The_Nightmare#/media/File:John_Henry_Fuseli_-_The_Nightmare.jpg

Vessel, E. A. (2018). [Photograph]. Retrieved from https://www.gold.ac.uk/calendar/?id=11846

Vessel, E. A., & Rubin, N. (2010). Beauty and the beholder: highly individual taste for abstract, but not real-world images. Journal of vision, 10(2), 18-18.

Vessel, E. A., Starr, G. G., & Rubin, N. (2012). The brain on art: intense aesthetic experience activates the default mode network. Frontiers in human neuroscience, 6, 66.

Vessel, E. A., Starr, G. G., & Rubin, N. (2013). Art reaches within: aesthetic experience, the self and the default mode network. Frontiers in Neuroscience, 7, 258.

Zabelina, D. L., & Andrews-Hanna, J. R. (2016). Dynamic network interactions supporting internally-oriented cognition. Current opinion in neurobiology, 40, 86-93.

 


How the brain entrains to musical rhythms.

‘How does the brain entrain to musical rhythms?’ This was the question posed by our guest lecturer, Anna Katharina Bauer. To answer it, we must first explore the concept of oscillation. Anna defined an oscillation as ‘any system which uses periodic fluctuations between two states’ (see Pikovsky et al., 2003): think of a clock’s pendulum, a spring, or rockers at a concert jumping up and down. An oscillation has three key parameters:

  • Amplitude (A): the magnitude of the oscillation.
  • Frequency (f): the number of cycles per unit of time (usually per second, i.e. Hz).
  • Phase (Φ): the position within a cycle; any point along the oscillation’s trajectory can be described by its phase.

So what is a neural oscillation? A neural oscillation is very similar: it too is defined by amplitude, frequency and phase. That is, ‘neural oscillations reflect periodic fluctuations in neural activity between high and low excitability states’ (Buzsaki & Draguhn, 2004).
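To make the three parameters concrete, the short sketch below generates a simple oscillation x(t) = A·sin(2πft + Φ); the particular values are arbitrary examples rather than anything from Bauer’s experiments.

```python
import numpy as np

A, f, phi = 1.0, 3.0, 0.0          # amplitude, frequency (Hz), starting phase (radians)
t = np.linspace(0, 2, 1000)        # two seconds of time, sampled finely
x = A * np.sin(2 * np.pi * f * t + phi)

# The instantaneous phase at any time point is the argument of the sine, wrapped to one cycle.
instantaneous_phase = (2 * np.pi * f * t + phi) % (2 * np.pi)
print(f"Peak amplitude: {x.max():.2f}, cycles in 2 s: {f * 2:.0f}")
```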

How do neural oscillations synchronize to external rhythms?

In 1665, the Dutch physicist Huygens observed two pendulum clocks mounted very close together. After a while, he noticed that their pendulums started swinging in synchrony. When he moved the clocks apart, they no longer synchronized, as they could no longer influence each other. This phenomenon, entrainment, is how neural oscillations synchronize to external rhythms. It has three requirements:

  • Involvement of a self-sustained oscillator
  • Rhythmic stimulation
  • Synchronization


Source: brilliant.org

Entrainment has been observed in many biological systems, such as the synchronized flashing of fireflies and the synchronized chirping of crickets. Humans show several kinds of entrainment: according to Dr Bauer, our bodies entrain when we dance to music, our breathing can synchronize to external rhythms, and, most remarkably, our brains can synchronize too. Interestingly, this neural entrainment can be modulated through attentional mechanisms or temporal expectations, which will be explored further later.

How can we measure entrainment?

Once Anna had established what entrainment is, she explored the concept of neural entrainment. During her PhD, Anna conducted two experiments investigating the synchronization of neural oscillations to an external rhythmic stimulation via phase alignment, i.e. neural entrainment. The first focused on temporal dynamics: the evolution of neural entrainment was characterized through behavioural modulation and recorded EEG. Here, the auditory stimuli were long and short continuous tones, each with a slight gap (10-20 ms) inserted at certain phases (see Henry and Obleser, 2012). The first image below shows the behaviour of individual participants, who were required to press a button when they heard the gap.


Source: Bauer et al. (2018)

As you can see, the participants were relatively accurate, and their button-pressing performance followed the stimulation in a sinusoidal pattern. Interestingly, the participants were more accurate in the long condition, indicating that the more time they had to entrain, the greater the entrainment effects were. The image below shows the neural activity in the frequency domain as evidence for neural entrainment.


Source: Bauer et al. (2018)

Evidently, there are spectral peaks in amplitude at 3 Hz (the stimulation frequency) and 6 Hz (its harmonic). This is further supported by the EEG topography images, where we can see a frontocentral activation that is most likely projected from the auditory cortex. This provides a solid measure of neural activity as evidence for neural entrainment. Inter-trial phase coherence in the time-frequency domain also evidences neural entrainment.

The second measure is called phase consistency, which is simply a measure of how consistently neural oscillations align with the stimulation the brain has entrained to. Using the same experimental paradigm, Anna measured phase consistency on a scale from 0 (random phase) to 1 (perfect phase synchronization). She found that phase synchronization occurs within one second of stimulation. In other words, it takes only up to a second for the brain to entrain to a tone oscillation.
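A minimal sketch of such a phase-consistency measure (often called inter-trial phase coherence) is shown below: the phase of each trial at the stimulation frequency is collected, and the length of the mean resultant vector gives a value between 0 (random phases) and 1 (perfect alignment). The phases are simulated for illustration; this is not Bauer’s analysis code.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated phases (radians) at 3 Hz for 50 trials: clustered phases mimic entrainment.
entrained_phases = rng.normal(loc=0.5, scale=0.4, size=50)
random_phases = rng.uniform(0, 2 * np.pi, size=50)

def phase_consistency(phases):
    """Length of the mean resultant vector: 0 = random phases, 1 = perfectly aligned."""
    return np.abs(np.mean(np.exp(1j * phases)))

print(f"entrained: {phase_consistency(entrained_phases):.2f}")
print(f"random:    {phase_consistency(random_phases):.2f}")
```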

The underlying idea of synchronization is closely related to anticipation: once subjects have learned a rhythm, they can anticipate the moment at which they will have to press the button, and it is this anticipation that allows them to synchronize with the tone’s gap.

And just as with behavioural accuracy, the phase of the subjects’ neural oscillations aligned with the phase of the tone more quickly in the long condition.

Her second experiment focused on cross-modal entrainment, in which two types of stimuli were used for entrainment both individually and in combination. These two stimuli were auditory and visual, and the design aimed to answer the following question:

Does visual rhythmic stimulation enhance auditory cortex activity and behavioural performance?

Here Anna focused on cross-modal entrainment using a similar experimental paradigm to the first study, but adding magnetoencephalography (MEG). Participants were simply required to detect gaps inserted at different phases of a 3 Hz pulsating circle (visual-only condition), a 3 Hz frequency-modulated tone (auditory-only condition), or both together (cross-modal audio-visual condition).

Accuracy in the audio-visual condition was significantly higher than in the auditory-only condition. Both EEG and MEG at 3 Hz (stimulation frequency) and 6 Hz (harmonic) consistently showed auditory cortex activation during the auditory-only condition and occipital activation during the visual-only condition. Most interestingly, MEG showed 3 Hz neural activation in the auditory cortices during visual stimulation even in the absence of auditory stimuli (Figure 3). Taken together, these results provide clear evidence of neural entrainment in both the visual and auditory modalities, and of cross-modal audio-visual entrainment at both the behavioural and neural levels.

Fig. 3


Source: Bauer et al. (2018)

What does all this mean?

Interest in the fascinating and universal phenomenon of entrainment has been increasing among researchers. Entrainment appears to pave the way to prediction, which is of great adaptive value because successes or failures in prediction have significant psychological and physiological consequences (Clark, 2013; Merker, 2015). Particularly interesting is that auditory entrainment appears to be fundamental to language development (Pammer, 2014), and rhythmic entrainment constitutes a distinctively musical behaviour that is very rare in other species (Merker, 2015; Patel, 2014). Rhythmic entrainment also has clinical implications as a working ingredient of music therapy, for example in dyslexia, in gait rehabilitation of stroke patients, and in other motor problems such as those found in Parkinson’s disease and autism (Pammer, 2014; Thaut et al., 2015).

The research presented by Anna offers a novel and exciting approach to music psychology. We look forward to hearing about her new discoveries in her postdoctoral studies.

Stella Sun, Kirsty Hawkins, Beatriz Matt Martin and Paulo Andrade.

References

Bauer, A. R., Bleichner, M. G., Jaeger, M., Thorne, J. D., & Debener, S. (2018). Dynamic phase alignment of ongoing auditory cortex oscillations. Neuroimage, 167, 396-407.

Calderone, D. J., Lakatos, P., Butler, P. D., & Castellanos, F. X. (2014). Entrainment of neural oscillations as a modifiable substrate of attention. Trends in cognitive sciences, 18(6), 300-309.

Clark, A. (2013). Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behavioral and Brain Sciences, 36(03), 181-204.

Clocks image: https://brilliant.org/practice/huygens-clock-puzzle/?chapter=intro Retrieved on 24th December 2018.

Henry, M. J., & Obleser, J. (2012). Frequency modulation entrains slow neural oscillations and optimizes human listening behavior. PNAS, 109(49), 20095-20100.

Merker, B., Morley, I., & Zuidema, W. (2015). Five fundamental constraints on theories of the origins of music. Philosophical Transactions of the Royal Society B: Biological Sciences, 370(1664), 20140095.

Pammer, K. (2014). Temporal sampling in vision and the implications for dyslexia. Frontiers in human neuroscience, 7, 933.

Pikovsky, A., Rosenblum, M., & Kurths, J. (2003). Synchronization: A Universal Concept in Non-linear Sciences. Cambridge University Press, United Kingdom.

Thaut, M. H., McIntosh, G. C., & Hoemberg, V. (2015). Neurobiological foundations of neurologic music therapy: rhythmic entrainment and the motor system. Frontiers in psychology, 5, 1185.


Jacques Launay: Is Music an Evolutionary Adaptation for Social Bonding?

“Without a song or a dance what are we?

So I say thank you for the music

For giving it to me”

-Lyrics by ABBA-

Music and dance exist across all cultures, and anthropological evidence makes it clear that musical instruments date back almost 50,000 years. While we all know how meaningful music is in our lives, its existence is puzzling, because bobbing our heads to the static beats of techno at a nightclub or singing carols with the family on Christmas day doesn’t seem to directly benefit our survival. Dr Jacques Launay, a lecturer in music psychology at Brunel University, presented a talk at Goldsmiths College about his view that music exists as an adaptation for social bonding.

In his talk, Launay began by introducing some of the theories regarding the existence of music. One is the view of cognitive psychologist Steven Pinker that music doesn’t have a purpose but is only a pleasure mechanism, an auditory drug, and a by-product of language (Pinker, 1997). However, knowing the power of music and the unique hold it has on us, it’s difficult to agree that music is simply a pleasure technology. A popular theory that opposes Pinker is the adaptationist view that music served a purpose in human evolution. Robin Dunbar (2003), with whom Launay worked during his postdoctoral research at Oxford University, suggests that for our ancestral tribes, music-making and dancing with other group members would have allowed the group to bond socially and resolve internal conflicts. As a result of these group benefits, such tribes would have grown stronger, better able to outcompete rival tribes or defend themselves against predators. Launay pointed out that a number of studies, including his own, support Dunbar’s theory: playing or dancing to music as a group has been shown to elicit social cohesion (Launay, Dean, & Bailes, 2013; Tarr, Launay, & Dunbar, 2014) and self-other likeability.

He continued by suggesting that several elements of music-making are socially bonding. These include low-level aspects of music-making, like shared intentionality (Reddish, Fischer, & Bulbulia, 2013) and shared task success (Launay, Dean, & Bailes, 2013). In fact, experimental evidence suggests that even just attending to the same stimulus as another person may be sufficient to encourage social bonding (Wolf, Launay, & Dunbar, 2016). In this experiment, two individuals engaged in a reaction time task. Launay explained that working from the same side of the screen, with shared attention, led to higher ratings of social bonding with the experiment partner. If participants worked individually, on different sides of the screen, the shared motivation manipulation didn’t affect social bonding (Wolf et al., 2016).

Launay next discussed the role of synchronisation in encouraging social bonding. When participants synchronise, they co-operate and bond socially more than when they are asynchronous (Hove & Risen, 2009; Reddish et al., 2013; Wiltermuth & Heath, 2009). The effect of synchrony on bonding can even be seen when participants synchronise with a fake virtual partner, with higher ratings of likeability and trust after synchronising (Launay et al., 2013; Launay, Dean, & Bailes, 2014).

Perhaps more relevant to music-making, evidence suggests these effects are also found with dance. When dancing to either the same or different music (at differing tempos), participants dancing to the same music show enhanced memory for dancer attributes (Woolhouse, Tidhar, & Cross, 2016). This is not just an effect of exertion: Launay mentioned a paper he co-authored in which synchrony and exertion independently raised prosociality (Tarr, Launay, Cohen, & Dunbar, 2015). It is evident that dancing in synchrony, or at least dancing together in time, is linked with social bonding (Tarr, Launay, & Dunbar, 2016; von Zimmermann, Vicary, Sperling, Orgs, & Richardson, 2018).

(Photo: https://m2ye-silentdisco.co.uk, 2018).

Additionally, Launay and colleagues conducted an experiment in Brazil, in which high school students were taught dance moves with varying levels of exertion. The students were then either instructed to dance in full or partial synchrony, depending upon visual and verbal cues given by the researchers. Results of the experiment showed an increase in pro-sociality ratings for both the fully synchronous and the partially synchronous groups (Tarr et al., 2015).

Music can also affect social bonding over time. Launay described a 6-month study that examined music and social bonding at three separate time points: 2, 9, and 21 weeks (Pearce, Launay & Dunbar, 2015). The purpose of the study was to determine whether a singing class created social bonds more quickly than other activities such as crafts or creative writing. The results support the theory that music is socially bonding: at all three time points, the singing-class members reported feeling significantly closer to their classmates after the classes than before.

One last experiment described by Launay aimed to determine whether the social bonding effects of music can be experienced on a larger scale. For this study, participants from a community choir that practised and performed both in small groups and as a combined choir provided self-report measures of social bonding before and after a small-group (20-80 members) and a large-scale (all 232 members) rehearsal (Weinstein, Launay, Pearce, Dunbar & Stewart, 2016). The self-report scores of social bonding between participants went up in both conditions. The increased score in the large-choir condition is particularly interesting, as choir members were singing with people they didn’t know, yet felt bonded to them after a rehearsal and performance. This supports the hypothesis that the effects of music on social bonding can also be experienced on a larger scale.

(Photo: https://popchoir.com/news/cone-and-try-us-out-in-the-autumn-term, 2018).

Launay highlights experimental evidence suggesting that synchronisation can influence social bonding, an effect that appears to arise from low-level aspects like shared attention, shared intentions and working towards a common goal. Other work shows that more exertive movement can further increase social closeness: likeability rises when people dance with bigger movements and in synchrony, a combination thought to be the most socially bonding (Tarr et al., 2015).

One consideration for the adaptive purpose of music is how music is used across cultures. Launay suggests that music evolved as a pre-linguistic form of communication, a primal way of bonding. This can still be seen, for example, in lullabies within mother-infant relationships, where language is not as effective as music for communication, and in large groups such as festivals or silent discos (Tarr, Launay and Dunbar, 2016).

Musical preference also influences people’s perception of how close they are to others, and predicts how likeable a person will be found on the basis of their musical taste (for example, people who share a preference for music are more likely to think they will connect well). Launay finished his talk by noting that shared traits such as coming from the same area or having the same religion have been compared in analyses of social bonding, but music is thought to hold a much bigger influence than any of these (Launay and Dunbar, 2015).

This insightful presentation demonstrates how powerful music is; it is a tool that we use every day of our lives to communicate with others throughout many cultures across the world. Music really is a unique and adaptive invention that cements our togetherness, allowing us to share moments and build relationships with others.

(Photo: https://quotefancy.com/quote/878927/Oliver-Sacks-Music-has-a-bonding-power-it-s-primal-social-cement, 2018).

Harin Lee, Dianna Vidas, Heather Thueringer and Kerry Schofield.

References

Hove, M. J., & Risen, J. L. (2009). It’s All in the Timing: Interpersonal Synchrony Increases Affiliation. Social Cognition, 27(6), 949–960. https://doi.org/10.1521/soco.2009.27.6.949

Launay, J., Dean, R. T., & Bailes, F. (2013). Synchronization can influence trust following virtual interaction. Experimental Psychology, 60(1), 53–63. https://doi.org/10.1027/1618-3169/a000173

Launay, J., Dean, R. T., & Bailes, F. (2014). Synchronising movements with the sounds of a virtual partner enhances partner likeability. Cognitive Processing, 15(4), 491–501. https://doi.org/10.1007/s10339-014-0618-0

Launay, J., & Dunbar, R. I. M. (2015). Playing with Strangers: Which Shared Traits Attract Us Most to New People? PLoS ONE, 10(6), e0129688. https://doi.org/10.1371/journal.pone.0129688

Pearce, E., Launay, J., & Dunbar, R. I. (2015). The ice-breaker effect: singing mediates fast social bonding. Royal Society Open Science, 2(10), 150221. doi:10.1098/rsos.150221

Pinker, S. (1997). How the mind works. New York, NY: Norton.

Popchoir. (2018). [photograph ]. Retrieved from https://popchoir.com/news/cone-and-try-us-out-in-the-autumn-term

Quotefancy. (2018). [Photograph]. Retrieved from https://quotefancy.com/quote/878927/Oliver-Sacks-Music-has-a-bonding-power-it-s-primal-social-cement

Reddish, P., Fischer, R., & Bulbulia, J. (2013). Let’s Dance Together: Synchrony, Shared Intentionality and Cooperation. PLoS ONE, 8(8). https://doi.org/10.1371/journal.pone.0071182

Silentdisco. (2018). [Photograph]. Retrieved from https://m2ye-silentdisco.co.uk

Tarr, B., Launay, J., Cohen, E., & Dunbar, R. (2015). Synchrony and exertion during dance independently raise pain threshold and encourage social bonding. Biology Letters 11, 20150767. https://dx.doi.org/10.1098/rsbl.2015.0767

Tarr, B., Launay, J., & Dunbar, R. I. M. (2016). Silent disco: dancing in synchrony leads to elevated pain thresholds and social closeness. Evolution and Human Behavior, 37(5), 343–349. https://doi.org/10.1016/j.evolhumbehav.2016.02.004

von Zimmermann, J., Vicary, S., Sperling, M., Orgs, G., & Richardson, D. C. (2018). The Choreography of Group Affiliation. Topics in Cognitive Science, 10, 80–94. https://doi.org/10.1111/tops.12320

Weinstein, D., Launay, J., Pearce, E., Dunbar, R. I., & Stewart, L. (2016). Singing and social bonding: changes in connectivity and pain threshold as a function of group size. Evolution and Human Behavior, 37(2), 152-158. doi:10.1016/j.evolhumbehav.2015.10.002

Wiltermuth, S. S., & Heath, C. (2009). Synchrony and Cooperation. Psychological Science, 20(1), 1–5.

Wolf, W., Launay, J., & Dunbar, R. I. M. (2016). Joint attention, shared goals, and social bonding. British Journal of Psychology, 107(2), 322–337. https://doi.org/10.1111/bjop.12144

Woolhouse, M. H., Tidhar, D., & Cross, I. (2016). Effects on Inter-Personal Memory of Dancing in Time with Others. Frontiers in Psychology, 7(February), 1–8. doi: 10.3389/fpsyg.2016.00167


The Jazz Turnaround: A Back-to-Back Paradigm for Studying Improvisation

Musical improvisation involves extremely complex cognitive processes—with performers engaging in rapid, in-the-moment decision-making coupled with focussed motor attention, all the while maintaining awareness of the other musicians and the music. It’s no wonder that it captivates the interest of Dr Freya Bailes, a music researcher at the University of Leeds, who presented a talk on the cognitive mechanisms involved in improvisation at Goldsmiths College on the 1st of February 2018.

Who’s leading who?

One aspect of improvisation that may not automatically spring to mind is leadership. Traditionally, within certain types of improvisation such as jazz music, leadership may arise from a conductor or more likely, the lead-player within the ensemble. These individuals show where the music is going (rhythmically, harmonically, and dynamically) through subtle changes in their playing and (often fantastically cryptic!) gestures and visual cues. But how does leadership occur for two people improvising freely?

This was one of the core focuses for Bailes and Dean in a recent study investigating cognitive processes in improvisation. The researchers paired professional pianists, instructing them to perform six three-minute improvisations. However, there was a catch! The performers had to play back-to-back at separate MIDI pianos, rendering them unable to use visual cues while improvising—Bailes wanted exchange of auditory information only.

Back-to-back pianists—the set-up used in Bailes’ experiment. (Source)

Limited directions for the format of the six improvisations were given. Two of the improvisations were labelled as completely free, one had a dynamic structure (quiet, loud, quiet), one was to be centred around a pulse, one was instructed to be led by Performer 1, and the last was to be led by Performer 2. The researchers were interested in how responses to each other’s playing would influence the development of leadership roles between the two performers. In addition, they were curious to discover the performers’ perceived leadership roles, i.e. who each performer believed was leading the improvisation at different points in the piece. Thirty minutes after performing, the performers listened back to their improvisations and rated who they felt influenced the music most in each section. They found that aural cues alone were sufficient for performers to identify who was taking the lead.

Sweat for science

Bailes and Dean aimed to probe both conscious and unconscious measures of the performers’ experiences. Therefore, in addition to the conscious measure (leadership rating), they employed an unconscious measure by recording the physiological arousal of performers while improvising. Arousal is considered a component of the emotional response triggered by listening to music (Khalfa et al. 2002; Rickard, 2004). They measured arousal by recording changes in skin conductance (SC)—a type of electrodermal activity caused by variations in the sweat glands, controlled unconsciously by the sympathetic nervous system (Khalfa, Isabelle, Jean-Pierre, & Manon, 2002).

Skin conductance (SC) is captured via skin electrodes placed on the fingertips or, when confronted with jazz pianists, on the left ankle. (Source)

SC is often measured on the fingertips—a bit problematic for pianists! Instead, Bailes and Dean measured SC on the pianist’s left ankle, as the performers were able to keep that part of their body still (the right ankle was left free for pedalling). Interestingly, it was previously hypothesised that SC might increase more during transitions in the improvisations, as those points in the music require increased attention and effort to develop a new pattern in the music (Dean & Bailes, 2016). An analysis of a case study for one duo found that SC typically did increase during transitions, e.g. when a new dynamic section began. One player’s SC matched the musical structure of the improvisations, while the other player had an overall greater variability in SC but did not always follow the shape of the music. In general, improvisation could intensify a performer’s arousal state by focussing attention on the moment-by-moment decision making, and awareness and reaction to the other performers’ actions. That’s a lot to think about at once!

How about you, the audience?

In addition to obtaining leadership perception from the performers, Bailes also investigated listeners’ (specifically non-musicians’) perceptions of the leadership roles during the improvisation. It seems that Bailes enjoys tackling difficult topics, as she herself described ‘perception of leadership in improvisation’ as an impossible task for non-musicians! It was the researchers’ turn to improvise, as they devised an alternative approach—to ask an open question to the non-musician listeners: “Indicate where any significant changes in sound occur within the improvised piece of music”. The piece they listened to was taken directly from the recordings of the professional musicians used for their study on leadership roles (Dean & Bailes, 2016). The questions asked were left deliberately open to interpretation as they didn’t want to bias any perceptions. Participants were asked to listen to the piece at a computer and move the mouse to indicate change. Large changes in music were to be indicated by faster mouse movements, and smaller changes by slower movements.

In addition to this, non-musicians were asked to report the level of arousal they perceived to be expressed in the piece of music. By moving the mouse along a scale (moving up the scale = higher arousal), participants mapped out the level of arousal over the time course of the music. Here, the team were interested in whether outside-listeners were sensitive to the physiological arousal of the performers. By comparing the outside-listeners’ perceived arousal with the performers’ SC over the time course of the music, Bailes and Dean were able to test for correlations that might support their hypotheses.
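
As a rough illustration of that comparison, and again only a sketch with invented data rather than the authors’ analysis, one could resample a listener’s continuous arousal ratings and a performer’s SC trace onto a common time base and correlate them:

```python
import numpy as np
from scipy.stats import pearsonr

# Sketch only: both series below are invented placeholders. The listener's
# continuous arousal ratings and the performer's SC are resampled onto a
# common time base before being correlated.

piece_length = 300.0                                   # seconds (assumed)
t_common = np.linspace(0.0, piece_length, 600)         # 2 Hz comparison grid

t_sc = np.arange(0.0, piece_length, 0.1)               # SC sampled at 10 Hz (assumed)
sc = 5.0 + np.cumsum(0.01 * np.random.randn(t_sc.size))

t_rating = np.arange(0.0, piece_length, 0.25)          # ratings at 4 Hz (assumed)
arousal = 0.5 + np.cumsum(0.005 * np.random.randn(t_rating.size))

sc_common = np.interp(t_common, t_sc, sc)
arousal_common = np.interp(t_common, t_rating, arousal)

# Note: slowly varying time series are autocorrelated, so the p-value here is
# optimistic; the correlation coefficient is the quantity of interest.
r, p = pearsonr(sc_common, arousal_common)
print(f"Correlation between performer SC and perceived arousal: r = {r:.2f}")
```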

Bailes and Dean developed a couple of interesting hypotheses concerning the outside-listeners. Firstly, they predicted that the outside-listeners’ perceptions would align with the performers’ perceptions of changes in leadership; however, this was not the case. Instead, the case study analysis of one duo revealed that the listeners’ perception of changes in sound aligned with the computational segmentation of each pianist’s performed key velocity. Their second hypothesis was that the outside-listeners’ perception of arousal would align with the performers’ level of physiological arousal, as measured by their SC, over the time course of the music. Here the results were mixed: in the same case study, Performer 2’s skin conductance correlated with the listeners’ perceptions of arousal, yet Performer 1’s did not. Bailes suggests that individual differences in SC (Performer 1 was more prone to sweating!) may have weakened the link between perceptions and physiological measures of arousal.

Diagram illustrating the levels of arousal measured in the experiment (Bailes, 2018)

The research presented by Bailes and Dean raises some interesting points. Their work addressed a range of intriguing questions about improvisation, from leadership roles to arousal, and the interplay between performers’ perceptions, their physiological changes, and non-musicians’ perceptions. Bailes and Dean’s research suggests that when two performers play together, aural cues alone are enough for them to agree on who was leading the music at any given point. However, their case study also found no evidence to support some of the hypotheses proposed, potentially highlighting the intricacy of investigating such concepts. It seems that when you’re fascinated by researching impossible tasks, you can’t always expect straightforward results—but that’s all part of the fun.

Nicholas Feasey, Taylor Liptak, and Alex Lascelles

References

Bailes, F. (2018). Cognitive processes in improvisation [Powerpoint slides]. Retrieved from https://learn.gold.ac.uk/course/view.php?id=8048

Dean, R. T., & Bailes, F. (2016). Relationships between generated musical structure, performers’ physiological arousal and listener perceptions in solo piano improvisation. Journal of New Music Research, 45(4), 361-374.

Khalfa, S., Isabelle, P., Jean-Pierre, B., & Manon, R. (2002). Event-related skin conductance responses to musical emotions in humans. Neuroscience Letters, 328(2), 145-149.

Rickard, N. S. (2004). Intense emotional responses to music: A test of the physiological arousal hypothesis. Psychology of Music, 32(4), 371-388.

Rowe, M. (2011, May 13). Jazz Code. [Web log post]. Retrieved January 20, 2018, from http://jazzbackstory.blogspot.co.uk/2011/05/jazz-code.html

 


“The seductiveness of music lies in its ability to titillate the senses”: Elaine Chew on musical structure

Think about the last time a piece of music took you by surprise… What triggered it? How did you feel? Did others react the same way? You might become aware of musical structure through an established rhythmic pattern or a subverted harmonic expectation. It exists at various levels within a piece of music, from short motifs through to longer patterns.

Structure is an integral component of music; indeed, music is often described as “organised sound” (Varèse, cited in Goldman, 1961). Composers conceive and organise structure, performers express it, and listeners decipher it. Differences in how we perceive these structures dictate our expectations of the forthcoming music and shape our individual experiences of it.

So how can we make sense of our musical experiences by analysing and quantifying structure? Elaine Chew is a self-described “mathemusical scientist” (Chew, 2016, p. 37). Her research on musical structure spans conceptual art through to mathematical modelling and gives new insights into music perception. She spoke to Goldsmiths’ Music, Mind, and Brain MSc students about the perception and apperception of musical structure.


“When practice becomes performance”
(Chew & Child, 2014)
Sight reading as a means of structural insight

The process of sight-reading requires an array of neurological and motor functions, including “perception (de-coding note patterns), kinesthetics (executing motor programs), memory (recognising patterns) and problem-solving skills (improvising and guessing)” (Parncutt & McPherson, 2002, p. 78). This reliance on pattern decoding means that sight-reading could provide insight into a performer’s initial comprehension of musical structure.

Source: Chew, 2013

Prior to the nineteenth century, public performances of music usually consisted of scores being performed at first sight, without practice (Parncutt & McPherson, 2002). Nowadays, music is usually painstakingly rehearsed beforehand. In 2013, Elaine Chew worked with composer Peter Child and conceptual artist Lina Viste Grønli to challenge our expectations of performance. After a visit to the Berlin Philharmonic, Viste Grønli found her thoughts fixated on the musicians’ warming-up “performance” and began questioning how these chaotic, unplanned sounds could be captured. What followed was Practising Haydn. Chew was recorded practising Haydn’s Piano Sonata in E flat, and the session was then meticulously transcribed to create a new score, which was performed publicly.


Source: Chew, 2013

Comparing the transcribed practice session with Haydn’s original score leaves a fascinating trace of the cognitive processing of musical structure. Child’s score is full of metrical changes, repetitions, pauses, and interruptions – quite unlike anything you’d expect in a piece of Haydn’s music. These alterations mark structural points at which musical patterns and expectations are subverted. The process is an example of one type of conceptual analysis of structure through the composer/performer relationship.


“How music works, why music works and […] how to make music work”
(Chew, 2016, p. 38)
Modelling musical structure and expectancy

In order to better understand the processes that lead composers and listeners to create and perceive musical boundaries, it is important to develop mathematical models of music cognition that describe variations in musical expectancy and tension (Huron, 2006). Tension can be induced by both tonal and temporal patterns. Moreover, the musical properties that make up such patterns are multidimensional and dynamic (Herremans & Chew, 2016), which makes them difficult to model accurately.

Chew argues that important parallels can be drawn between our understanding of the physical world and our experience of musical structure (Chew, 2016). As she explains, people can imagine and describe forms of physical movement with ease: What does it feel like to march in a muddy swamp? How vividly can you remember your first time accelerating down a ski slope or on a rollercoaster? Can you picture the swooping sensation of a falcon changing its course in flight? For most people, these thought experiments are intuitive. Composers can, therefore, draw from our common knowledge of the physical world to design equally vivid musical gestures.

More importantly, concepts from physics can constitute a reliable framework to describe musical structure. In the same way that physicists use mathematics to model physical phenomena, mathemusical scientists can describe the musical world in mathematical terms. They can use mathematical modelling techniques from physics to develop more accurate mathematical models of music perception: Chew uses the concept of gravity and the properties of Newtonian mechanics to model the dynamics of tonal tension and the effect of musical pulse on expectancy.

The spiral array model of tonality is a geometric representation of tonal space in which pitch classes, chords, and keys are all assigned spatial coordinates, with each pitch class lying at a point along a helix.


Source: Chew, 2016, p. 44

Newton’s law of gravitation allows us to localise the centre of gravity of a non-uniform object by integrating the weight of all the points in that object. The gravitational pull is concentrated at the centre of gravity of a given object, and the gravitational force between two objects is inversely proportional to the square of the distance between them. In mathematical terms, we have:

F = G · m₁m₂ / r²

where F is the gravitational force between two bodies of masses m₁ and m₂, r is the distance between them, and G is the gravitational constant.

Likewise, the tonic is the centre of gravity of a given tonal context – also defined as the “centre of effect”. As the tonic changes within the modulations of the harmonic structure, the centre of effect changes accordingly. Tones that are harmonically distant from the centre of effect induce tension, while tones closer to it allow for resolution. Strictly speaking, this tonal pull behaves less like gravity and more like an elastic restoring force (Hooke’s law): the further a tone lies from the centre of effect, the stronger the pull back towards it. Hence, tones moving away from and towards the tonal centre over time create a musical narrative (see Herremans and Chew’s paper on tension ribbons for more information).
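
To give a flavour of how these ideas can be computed, here is a minimal Python sketch of the spiral array idea. It is not Chew’s reference implementation: the helix parameters and the equal note weights are simplifying assumptions (the full model uses calibrated weights for chords and keys), but it shows how a centre of effect can be formed and how tension can be read off as distance from it.

```python
import numpy as np

# Illustrative sketch of the spiral array idea (not Chew's reference code).
# Pitch classes sit on a helix indexed by steps along the line of fifths:
# successive fifths are a quarter-turn apart and rise by a height H.

R, H = 1.0, np.sqrt(2.0 / 15.0)   # radius and rise per fifth (common choice; assumed)

def pitch_position(k):
    """Position of the pitch class k fifths above C on the helix."""
    return np.array([R * np.sin(k * np.pi / 2.0),
                     R * np.cos(k * np.pi / 2.0),
                     k * H])

def centre_of_effect(fifth_indices, weights=None):
    """Weighted average of pitch positions: the 'centre of effect' of a sonority."""
    points = np.array([pitch_position(k) for k in fifth_indices])
    w = np.ones(len(points)) if weights is None else np.asarray(weights, dtype=float)
    w = w / w.sum()
    return (w[:, None] * points).sum(axis=0)

# C major triad: C (0), G (1 fifth up) and E (4 fifths up) along the line of fifths.
c_major_ce = centre_of_effect([0, 1, 4])

# Read off "tension" as the distance of a tone from the current centre of effect.
for name, k in [("G (close)", 1), ("F sharp (distant)", 6)]:
    d = np.linalg.norm(pitch_position(k) - c_major_ce)
    print(f"{name}: distance from the C major centre of effect = {d:.2f}")
```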

Tonality is not the only parameter that shapes our experience of musical tension. Through her Expression Synthesis Project (Chew et al., 2005), Chew also illustrates how timing and beat can affect expectancy, and how this can be modelled according to Newtonian mechanics. The pull is dictated by the musical pulse, and Newton’s three laws of motion are used to operationalise timing, with the time elapsed between two beats analogous to the distance between two points in space (Chew, 2016).
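
One loose reading of that analogy, sketched with invented beat times rather than anything from the Expression Synthesis Project itself: if score position plays the role of distance, then local tempo behaves like velocity and tempo change like acceleration.

```python
import numpy as np

# A loose illustration of the timing-as-motion analogy (my reading, not ESP code):
# if score position plays the role of distance, local tempo behaves like velocity
# and tempo change like acceleration. The beat times below are invented.

beat_times = np.array([0.00, 0.50, 1.00, 1.52, 2.10, 2.75, 3.45])  # seconds

ioi = np.diff(beat_times)                  # inter-onset intervals (s per beat)
tempo = 60.0 / ioi                         # "velocity": beats per minute
tempo_change = np.diff(tempo) / ioi[1:]    # "acceleration": BPM per second

print("Local tempo (BPM):", np.round(tempo, 1))
print("Tempo change (BPM/s):", np.round(tempo_change, 1))
```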

Chew’s work on the mathematical modelling of musical structure allowed her and her colleague Alexandre François to develop the MuSA.RT software, which can analyse the tonal structure of a given musical piece and provide a corresponding graphical representation using the spiral array model. In this video, Chew demonstrates how the software responds to musical information:

Another exciting application of Chew’s work is its potential for artificial music composition. MorpheuS is an automatic music generation system, developed by Chew and her colleague Dorien Herremans, which uses machine-learning techniques and pattern-detection algorithms in conjunction with Chew’s tonal tension model to produce novel music in a specific style or in a combination of several styles. For instance, below is a recording of three pieces morphed from A Little Notebook for Anna Magdalena by J. S. Bach and three pieces morphed from 30 and 24 Pieces for Children by Kabalevsky.


“Listening as a creative act”
(Smith, Schankler & Chew, 2014)
Individual differences in structure perception

Whilst music perception and cognition can be tracked to a certain extent, Chew emphasises our individual differences. Musical structure arises from parameters on different layers, which leaves the listener plenty of room for interpretation. Attention seems to play a crucial role in shaping perception and untangling ambiguities, and attention is in turn influenced by personal listening history and expectations (Smith et al., 2014a). Because music listening and music making require the integration of diverse musical parameters (Herremans & Chew, 2016), researchers’ ability to predict an individual’s experience is limited by how much listeners diverge in which musical features they deem relevant.

Perception of musical boundaries, for example, can be predicted from novelty peaks, which capture the extent to which different musical features change over time (Smith et al., 2014b). Timbre, harmony, key, rhythm or tempo might be decisive. And again, boundary and novelty annotations by listeners reveal individual differences across those musical parameters. Not every novelty peak makes us perceive a structural boundary, because personal attention filters the physical events. Some theories of structure perception, such as Lerdahl and Jackendoff’s (1983) Generative Theory of Tonal Music, ascribe Gestalt rules to the process, but research suggests that our perceptions vary from person to person (Smith et al., 2014a). When repeatedly exposed to the same musical piece, people even disagree with themselves about structure (Margulis, 2012)!
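
As an illustration of the novelty-peak idea, the sketch below uses a standard checkerboard-kernel calculation (in the spirit of, though not necessarily identical to, the method in Smith et al., 2014b): it slides a kernel along the diagonal of a self-similarity matrix and reads off candidate boundaries from peaks in the resulting curve.

```python
import numpy as np

# Checkerboard-kernel novelty curve (a standard approach, offered as an
# illustration rather than the exact method of Smith et al., 2014b).
# Peaks mark moments where the musical features change abruptly.

def novelty_curve(features, kernel_size=16):
    """features: (n_frames, n_dims) matrix, e.g. chroma or timbre frames."""
    # Self-similarity matrix from frame-wise cosine similarity
    normed = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-9)
    ssm = normed @ normed.T

    # Checkerboard kernel: +1 within the past/future blocks, -1 across them
    half = kernel_size // 2
    sign = np.sign(np.arange(kernel_size) - half + 0.5)
    kernel = np.outer(sign, sign)

    novelty = np.zeros(ssm.shape[0])
    for i in range(half, ssm.shape[0] - half):
        patch = ssm[i - half:i + half, i - half:i + half]
        novelty[i] = np.sum(patch * kernel)
    return novelty

# Toy features: two contrasting sections, so one boundary is expected mid-way.
features = np.vstack([np.tile([1.0, 0.0, 0.0], (50, 1)),
                      np.tile([0.0, 1.0, 0.0], (50, 1))])
nc = novelty_curve(features)
print("Frame with the strongest novelty peak:", int(np.argmax(nc)))
```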

Structure is often ambiguous – particularly in improvised music – so it’s important to remember that our perception is flexible. In Practising Haydn, the transcription process was open to interpretation: how and why did the composer decide which changes warranted formal transcription? This ambiguity of structural boundaries is likely due to the multidimensional complexity of musical patterns and the aggregate nature of the perceptual process.

These projects emphasise the creative nature of listening, the breadth of Chew’s work, and the important role that structure plays in our understanding of music perception and cognition in general. Next time you’re listening to that exciting piece of music, take a minute to remember how complex and unique your experience may be.

Lena Esther Ptasczynski, Fran Board, and Paul Bejjani

References
Chew, E., François, A., Liu, J., Yang, A. (2005). ESP: A Driving Interface for Expression Synthesis. Proceedings of the 2005 International Conference on New Interfaces for Musical Expression (NIME05), Vancouver, BC, Canada, 224-227.
Chew, E. (2013). About practising Haydn. Retrieved January 31, 2018, from http://elainechew-piano.blogspot.co.uk/2013/09/about-practicing-haydn.html
Chew, E., & Child, P. (2014). Multiple Sense Making: When Practice Becomes Performance. Cambridge, UK: Cambridge University.
Chew, E. (2016). Motion and gravitation in the musical spheres. In J. Smith, E. Chew, & G. Assayag (Eds.), Mathemusical Conversations: Mathematics and Computation in Music Performance and Composition. Singapore: World Scientific Publishing Company.
Goldman, R. F. (1961). Varèse: Ionisation; Density 21.5; Intégrales; Octandre; Hyperprism; Poème Electronique. Instrumentalists, cond. Robert Craft. Columbia MS 6146 (stereo) [Record review]. The Musical Quarterly, 47, 133–134.
Herremans, D., & Chew, E. (2016). Tension ribbons: Quantifying and visualising tonal tension. Second International Conference on Technologies for Music Notation and Representation, 8–18.
Huron, D. (2006). Sweet Anticipation: Music and the Psychology of Expectation. MIT Press.
Lerdahl, F., & Jackendoff, R. (1983). A Generative Theory of Tonal Music. Cambridge, MA: MIT Press.
Margulis, E. H. (2012). Musical Repetition Detection Across Multiple Exposures. Music Perception: An Interdisciplinary Journal, 29(4), 377–385. https://doi.org/10.1525/mp.2012.29.4.377
Parncutt, R., & McPherson, G. E. (Eds.). (2002). The Science and Psychology of Music Performance: Creative Strategies for Teaching and Learning. New York: Oxford University Press.
Smith, J., Schankler, I., & Chew, E. (2014a). Listening as a creative act: Meaningful differences in structural annotations of improvised performances. Music Theory Online, 20(3). http://www.mtosmt.org/issues/mto.14.20.3/mto.14.20.3.smith_schankler_chew.html
Smith, J., Chuan, C.-H., & Chew, E. (2014b). Audio properties of perceived boundaries in music. IEEE Transactions on Multimedia, 16(5), 1219-1228.