
Centre for Music and Science


Musical training has traditionally relied heavily on formal education, which is often expensive and not available to everyone. Even for those with access, formal education typically provides only one or two contact hours per week, leaving significant independent study time without teacher guidance.

We aim to develop computer software to assist with this independent study. Such software can be much more affordable than having a teacher supervise one's studies.

One area of interest is developing music perception and appreciation skills. While there are already commercial software packages like Auralia and Duolingo Music, there is still room for innovation, such as broadening to a wider range of musical styles or addressing higher levels of music appreciation.

Another area of interest is developing performance skills, such as sound production on voice and instruments, playing by ear, or learning specific pieces of music. These projects can leverage recent advancements in automatic sound analysis to help the algorithm understand how the user is singing or playing. The algorithm can then interpret this input, provide feedback, and make recommendations for improvement.
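As a minimal sketch of what such automatic sound analysis involves, the example below estimates the pitch of a monophonic audio frame by autocorrelation. This is an illustrative toy, not any system used at the CMS; production pitch trackers (e.g. pYIN or CREPE) are considerably more robust.

```python
import numpy as np

def estimate_pitch_hz(frame, sample_rate, fmin=80.0, fmax=1000.0):
    """Estimate the fundamental frequency of an audio frame via autocorrelation."""
    frame = frame - np.mean(frame)  # remove DC offset
    # Autocorrelation at non-negative lags.
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    # Search only lags corresponding to plausible fundamentals.
    lag_min = int(sample_rate / fmax)
    lag_max = int(sample_rate / fmin)
    best_lag = lag_min + int(np.argmax(corr[lag_min:lag_max]))
    return sample_rate / best_lag

# Synthetic check: a pure 220 Hz sine wave.
sr = 44100
t = np.arange(2048) / sr
pitch = estimate_pitch_hz(np.sin(2 * np.pi * 220.0 * t), sr)
```

On clean synthetic input this recovers the fundamental to within a few hertz; real singing or instrumental playing requires windowing, voicing detection, and smoothing over time.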

It is easy to envision an algorithm giving feedback on basic aspects of music performance that can be objectively described, such as pitch and rhythm accuracy. A more stimulating challenge is providing feedback on subjective aspects of music interpretation, such as rubato.
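To make the objective case concrete, here is a hedged sketch of how pitch and rhythm accuracy might be scored against a reference. The note format, tolerance values, and the assumption that notes are already aligned one-to-one are all illustrative choices, not part of any published system.

```python
import math

def score_performance(reference, performed, cents_tol=50.0, onset_tol=0.05):
    """Return (pitch_accuracy, timing_accuracy) as fractions of notes in tolerance.

    Each note is a (pitch_hz, onset_seconds) pair; the two lists are
    assumed to be aligned note-for-note.
    """
    pitch_ok = timing_ok = 0
    for (ref_hz, ref_t), (perf_hz, perf_t) in zip(reference, performed):
        cents = 1200.0 * math.log2(perf_hz / ref_hz)  # pitch error in cents
        if abs(cents) <= cents_tol:
            pitch_ok += 1
        if abs(perf_t - ref_t) <= onset_tol:
            timing_ok += 1
    n = len(reference)
    return pitch_ok / n, timing_ok / n

# Illustrative data: one note sung noticeably flat, one played late.
ref = [(440.0, 0.0), (494.0, 0.5), (523.0, 1.0)]
perf = [(442.0, 0.01), (470.0, 0.52), (523.0, 1.2)]
pitch_acc, timing_acc = score_performance(ref, perf)
```

Per-note errors like these could drive targeted feedback ("the second note was about 85 cents flat"), whereas interpretive qualities such as rubato resist this kind of fixed-tolerance comparison.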

Aside from critiquing an individual performance, an algorithm could also be used to track and assess the user’s practice strategies. The quality of practice is known to be an important predictor of musical development, yet practice is normally not monitored by teachers, so students do not commonly receive feedback on it. Depending on the intended user, the app could be used to incentivise good practice techniques through the use of virtual awards, streaks that can be shared on social media, and so on.
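A streak of the kind described above could be computed from a simple log of practice dates, as in this sketch. The data model here is a deliberate simplification; a real app would record sessions automatically and handle time zones.

```python
from datetime import date, timedelta

def current_streak(practice_dates, today):
    """Count consecutive days of practice ending today (or yesterday,
    if the user has not yet practised today)."""
    days = set(practice_dates)
    start = today if today in days else today - timedelta(days=1)
    streak = 0
    day = start
    while day in days:
        streak += 1
        day -= timedelta(days=1)
    return streak

# Illustrative log: three consecutive days of practice.
log = [date(2024, 10, 18), date(2024, 10, 19), date(2024, 10, 20)]
streak = current_streak(log, today=date(2024, 10, 20))
```

Counting back from yesterday when today is absent means a streak is not shown as broken before the day is over, which is the usual convention in habit-tracking apps.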


Latest news

New paper: Artist identification using rhythm via machine learning

21 October 2024

We are excited to share our new paper appearing in the journal Royal Society Open Science, entitled “Rhythmic Qualities of Jazz Improvisation Predict Performer Identity and Style in Source-Separated Audio Recordings”. This was completed by Huw Cheston during his PhD at the CMS, and builds from two earlier publications...

New paper: Computational analysis of improvised music at scale

1 October 2024

Our new paper entitled “Jazz Trio Database: Automated Annotation of Jazz Piano Trio Recordings Processed Using Audio Source Separation” is just published in Transactions of the International Society of Music Information Retrieval (TISMIR). This paper arises from work completed by Huw Cheston during his PhD at the CMS, in...

New paper: Coordinating online music performances

1 October 2024

Our new paper entitled “Trade-offs in Coordination Strategies for Duet Jazz Performances Subject to Network Delay and Jitter” has just been published in Music Perception. This paper arises from work completed by Huw Cheston during the early stages of his PhD at the CMS, and was funded by an award from Cambridge Digital...