An AI Dress Rehearsal: Exploring Music Performance and Interaction with Computational Models
Carlos Cancino-Chacón
The way a piece of music is performed strongly shapes how we experience and
enjoy it. A good performance goes beyond a precise rendering of the score:
performers shape parameters such as tempo, dynamics, and articulation to convey
emotion and engage listeners.
This talk focuses on a specific area of research: computational models of
expressive music performance. These models codify hypotheses about expressive
performance as mathematical formulas or computer programs, enabling systematic
and quantitative analysis. They serve two main purposes: they allow us to test
hypotheses about how music is performed, and they can be used as tools to
create automated or semi-automated performances in artistic and educational
settings.
In this talk, I will explore two key aspects: data-driven approaches to modeling
expressive performance, and interdisciplinary collaboration with music cognition
research to understand how humans interact musically and develop expressive
interpretations. I will illustrate these aspects through three main topics:
(1) Basis Function Models, a machine learning framework for generating
expressive performances from musical scores (sketched in the example below);
(2) studies of human interaction in musical performance, with insights from the
development of a real-time automatic accompaniment system; and (3) the Rach3
Project, an investigation into how pianists learn new music and develop their
own expressive interpretations.
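To make the Basis Function Model idea concrete, here is a minimal sketch in
Python; it is not the implementation presented in the talk, and the feature
names and numbers are invented for illustration. Each note in the score is
encoded as a vector of basis functions (here: normalized pitch, a forte-marking
flag, and a downbeat flag), and a linear model is fit by least squares to map
those features to an expressive parameter such as note-wise MIDI velocity.

    import numpy as np

    # Toy score: each row encodes one note via basis functions.
    # Columns (illustrative only): normalized pitch, "forte" marking
    # flag, downbeat flag. Real basis function models use many more
    # bases derived from the score.
    phi = np.array([
        [0.50, 0.0, 1.0],   # note 1
        [0.55, 0.0, 0.0],   # note 2
        [0.60, 1.0, 1.0],   # note 3
        [0.58, 1.0, 0.0],   # note 4
    ])

    # Observed expressive targets, e.g. normalized MIDI velocities
    # from a human performance (made-up numbers).
    y = np.array([0.45, 0.40, 0.75, 0.70])

    # Fit linear weights by least squares: y ~ phi @ w
    w, *_ = np.linalg.lstsq(phi, y, rcond=None)

    # Predict an expressive value for a new note from its score features.
    new_note = np.array([[0.52, 1.0, 1.0]])
    print(new_note @ w)

Under this framing, the expressive knowledge lives in the basis functions:
replacing the linear fit with a non-linear model (as in later variants of this
line of work) changes the mapping, not the score representation.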