
Centre for Music and Science

 

The goal of music cognition research is to develop a scientifically rigorous understanding of the psychological processes underlying music perception, appreciation, and production. Unlike neuroscience, which focuses on identifying the neural substrates of mental processes, music cognition prioritises understanding how information flows through the human mind. This includes exploring representations (how musical information is stored) and algorithms (how this information is manipulated).

Music cognition encompasses a wide range of mental processes, such as:

Perception and appreciation

  • Similarity
  • Tonality
  • Memory
  • Emotion
  • Semantic and cross-modal associations
  • Expectation
  • Tension
  • Syntax
  • Rhythm
  • Pleasure and groove
  • Consonance
  • Auditory scene analysis
  • Imagery
  • Composer/performer identification
  • Analysing music
  • Reading music notation

Music production

  • Composition
  • Improvisation
  • Interpreting musical scores
  • Coordinating between musicians
  • Singing
  • Dancing
  • Motor learning

Computational modelling is central to music cognition research, especially at the CMS. The aim is to build computer programs that simulate human cognitive processes, providing explicit and testable predictions. For example, if we are studying similarity perception, we would try to build a computer program that takes a pair of musical stimuli as input and predicts how similar a human listener will judge them to be. This exercise forces us to make every detail of our cognitive theory explicit, and commits the theory to specific predictions that can be tested against empirical data.
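
To make this concrete, here is a minimal, hypothetical sketch of such a program. It is an illustration only, not one of the models used at the CMS: it assumes each melody is represented as a list of MIDI pitches, and predicts similarity from the normalised edit distance between the two sequences.

```python
# Toy "similarity perception" model (illustration only): melodies are lists of
# MIDI pitches; predicted similarity is 1 minus the normalised edit distance.

def edit_distance(a: list[int], b: list[int]) -> int:
    """Classic Levenshtein distance between two pitch sequences."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,
                          d[i][j - 1] + 1,
                          d[i - 1][j - 1] + cost)
    return d[m][n]

def predicted_similarity(melody_1: list[int], melody_2: list[int]) -> float:
    """Map edit distance to a 0-1 similarity prediction (1 = identical)."""
    longest = max(len(melody_1), len(melody_2)) or 1
    return 1.0 - edit_distance(melody_1, melody_2) / longest

# Example: two variants of the same phrase should score as fairly similar.
print(predicted_similarity([60, 62, 64, 65, 67], [60, 62, 64, 67, 67]))  # 0.8
```

Even this toy model makes a specific, falsifiable prediction for every pair of melodies, which is exactly what allows it to be compared against human ratings.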

Our computational music cognition research projects typically involve several steps. First, we identify a particular cognitive process to focus on (for example, similarity perception). Second, we identify several candidate theories of how the mind might perform that task, and find or write corresponding computer programs that instantiate those theories. Third, we collect one or more psychological datasets that capture how a human performs that task (e.g. by asking participants to rate the similarity of pairs of musical stimuli). These datasets could be compiled from previous research projects, or produced by running a new experiment. Ideally, the datasets are chosen so as to elicit contrasting predictions from the computational models. Fourth, we compare the predictions from the computational models with the observed data, and thereby make inferences about which theories provide the best account of the cognitive process.
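
As an illustration of the final comparison step, the following sketch uses made-up ratings and two hypothetical candidate models ("contour_model" and "interval_model" are placeholder names). Each model is scored by the Pearson correlation between its predictions and the human similarity ratings, and the model with the higher correlation is tentatively preferred.

```python
# Sketch of the model-comparison step with hypothetical data and models.
import numpy as np

def pearson_r(x: np.ndarray, y: np.ndarray) -> float:
    """Pearson correlation between model predictions and observed ratings."""
    return float(np.corrcoef(x, y)[0, 1])

# Hypothetical behavioural data: one mean similarity rating per stimulus pair.
human_ratings = np.array([0.9, 0.2, 0.7, 0.4, 0.8])

# Predictions from two hypothetical candidate models for the same pairs.
model_predictions = {
    "contour_model":  np.array([0.8, 0.3, 0.6, 0.5, 0.9]),
    "interval_model": np.array([0.5, 0.5, 0.5, 0.4, 0.6]),
}

scores = {name: pearson_r(preds, human_ratings)
          for name, preds in model_predictions.items()}
best = max(scores, key=scores.get)
print(scores, "-> best account:", best)
```

In practice the comparison is more careful than this (for example, using cross-validation and taking model flexibility into account), but the underlying logic is the same.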

A major priority for us is facilitating future research. One way to do this is by developing software that brings together implementations of useful cognitive models, algorithms, or features; the incon package is an example, collecting a large number of consonance models from the literature. Another is to create large behavioural datasets that can support many future studies, rather than small datasets specialised to a particular research question.

A classic approach to creating general-purpose datasets involves collecting a large set of stimuli (e.g., melodies, timbres, or 5-second audio snippets) and gathering extensive behavioural responses for each (e.g., ratings of emotional attributes or performance on memory tasks). These datasets can then be used to define specific modelling challenges with numerically scoreable outcomes. For instance, the inconData package offers a collection of consonance perception datasets providing behavioural assessments of the consonance of various musical chords. The challenge lies in developing computational models capable of accurately predicting these consonance judgments (see Harrison & Pearce, 2020 and Eerola & Lahdelma, 2021 for examples).
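
The structure of such a challenge can be sketched in a few lines. In the hypothetical example below, the dataset maps chords (as pitch-class sets) to made-up mean consonance ratings, the "model" is a crude stand-in heuristic rather than any published consonance model, and root-mean-square error provides the numerical score.

```python
# Sketch of a numerically scoreable modelling challenge (all values hypothetical).
import numpy as np

# Hypothetical behavioural dataset: chord -> mean consonance rating (1-7 scale).
dataset = {
    (0, 4, 7): 6.1,  # major triad
    (0, 3, 7): 5.8,  # minor triad
    (0, 1, 2): 1.9,  # chromatic cluster
    (0, 6):    2.7,  # tritone
}

def toy_consonance_model(chord: tuple[int, ...]) -> float:
    """Stand-in model: penalise seconds, sevenths, and tritones within the chord."""
    notes = sorted(chord)
    penalties = 0
    for i in range(len(notes)):
        for j in range(i + 1, len(notes)):
            interval = (notes[j] - notes[i]) % 12
            if interval in (1, 2, 6, 10, 11):
                penalties += 1
    return 7.0 - 2.0 * penalties  # crude mapping onto the rating scale

def rmse(model, data) -> float:
    """Score a model by how closely it reproduces the observed ratings."""
    errors = [model(chord) - rating for chord, rating in data.items()]
    return float(np.sqrt(np.mean(np.square(errors))))

print(rmse(toy_consonance_model, dataset))
```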

Our preferred method of data collection is online experiments, facilitated by tools like PsyNet. These allow participants to engage remotely, streamlining data collection. However, in-person studies remain essential for studying expert performance, as exemplified by our research on jazz piano trios (Huw Cheston, CMS PhD student) and classical organ performance (Katelyn Emerson, CMS PhD student).

Much of our music cognition work has primarily theoretical motivations: we simply want to understand the musical mind better. However, music cognition research also has many practical applications; in particular, the algorithms that we develop can often be incorporated into software for diverse purposes, such as music generation, music recommendation, and so on.

Notes for prospective students

Prior experience in computer programming is not a prerequisite for pursuing computational music cognition at the CMS. However, a strong enthusiasm for computational methods and their potential applications in human psychology is essential. We encourage you to explore these topics and experiment with programming tutorials prior to applying. If accepted into the programme, you will have access to a range of learning resources, such as DataCamp subscriptions, to help you develop these skills further.

If you do not have prior experience in computational modelling, a good strategy is to focus on applying pre-existing models to behavioural datasets. Statistical methods can then be used to relate the extracted features (e.g., roughness, harmonicity) to the human data; a brief sketch of this approach follows the list below. Here is a collection of relevant toolboxes and models:

  • MIRToolbox (extracting music features from audio)
  • MIDI Toolbox (extracting music features from MIDI)
  • IDyOM (modelling probabilistic expectation)
  • leman2000R (modelling psychoacoustic expectation)
  • incon (modelling consonance perception)
  • voicer (modelling perceptual principles of voice leading)

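As a sketch of this strategy, the example below assumes we have already extracted roughness and harmonicity values for a handful of chords (in practice these would come from toolboxes such as those listed above) and relates them to human consonance ratings with ordinary least-squares regression. All numbers are made up for illustration.

```python
# Sketch: relate extracted features to human ratings via least-squares regression.
import numpy as np

# Hypothetical feature values and mean consonance ratings for five chords.
roughness   = np.array([0.10, 0.45, 0.80, 0.30, 0.60])
harmonicity = np.array([0.85, 0.60, 0.15, 0.75, 0.30])
ratings     = np.array([6.2, 4.1, 1.8, 5.0, 3.2])

# Design matrix with an intercept column.
X = np.column_stack([np.ones_like(roughness), roughness, harmonicity])
coefs, *_ = np.linalg.lstsq(X, ratings, rcond=None)
predicted = X @ coefs

r = np.corrcoef(predicted, ratings)[0, 1]
print("intercept, roughness, harmonicity coefficients:", coefs)
print("correlation between predicted and observed ratings:", round(float(r), 3))
```
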
For a more detailed introduction to possible music cognition research projects, please see A (Prospective) Student Guide to Music Cognition at the CMS.

Example publications

Anglada-Tort, M., Harrison, P. M. C., Lee, H., & Jacoby, N. (2023). Large-scale iterated singing experiments reveal oral transmission mechanisms underlying music evolution. Current Biology, 33(8), 1472–1486.e12. https://doi.org/10.1016/j.cub.2023.02.070

Cheung, V. K. M., Harrison, P. M. C., Meyer, L., Pearce, M. T., Haynes, J.-D., & Koelsch, S. (2019). Uncertainty and surprise jointly predict musical pleasure and amygdala, hippocampus, and auditory cortex activity. Current Biology, 29(23), 4084–4092.e4. https://doi.org/10.1016/j.cub.2019.09.067

Harrison, P. M. C., & MacConnachie, J. M. C. (2024). Consonance in the carillon. The Journal of the Acoustical Society of America, 156(2), 1111–1122. https://doi.org/10.1121/10.0028167

Harrison, P. M. C., & Pearce, M. T. (2020). Simultaneous consonance in music perception and composition. Psychological Review, 127(2), 216–244. https://doi.org/10.1037/rev0000169

Jacoby, N., Polak, R., Grahn, J. A., Cameron, D. J., Lee, K. M., Godoy, R., Undurraga, E. A., Huanca, T., Thalwitzer, T., Doumbia, N., Goldberg, D., Margulis, E. H., Wong, P. C. M., Jure, L., Rocamora, M., Fujii, S., Savage, P. E., Ajimi, J., Konno, R., … McDermott, J. H. (2024). Commonality and variation in mental representations of music revealed by a cross-cultural comparison of rhythm priors in 15 countries. Nature Human Behaviour. https://doi.org/10.1038/s41562-023-01800-9

Marjieh, R., Harrison, P. M. C., Lee, H., Deligiannaki, F., & Jacoby, N. (2024). Timbral effects on consonance disentangle psychoacoustic mechanisms and suggest perceptual origins for musical scales. Nature Communications, 15(1), 1482. https://doi.org/10.1038/s41467-024-45812-z

Wilkie, H., & Harrison, P. M. C. (accepted). Reverberation time and musical emotion in recorded music listening. Music Perception.

 
