Friday 8 March 2026 - 11am

Speaker: Charles McGhee (University of Cambridge)

Title: Segmental Pronunciation Training with Consistent Articulatory Representations

Abstract: When a learner of a second language (L2) fails to produce the sounds of that language, it can affect their confidence and discourage them from engaging in spoken conversation. Computer-Assisted Pronunciation Training (CAPT) systems provide an assessment-feedback loop that enables learners to address pronunciation errors in an objective, stress-free environment. The feedback step is crucial for overcoming errors that arise from first language (L1) influence and may not be easily resolved through the learner's own perception of their pronunciation. Many CAPT systems are based on Automatic Speech Recognition (ASR). These systems not only rely on accurate non-native ASR but are also limited to text descriptions and template diagrams of any required articulatory correction. Acoustic-to-Articulatory Inversion (AAI) is the process of estimating articulatory movements from speech, and it offers a more direct way to assess and visually display the articulatory differences between learner and reference speech. In this talk, we will explore the challenges of using AAI for L2 English pronunciation training, including the difficulty of producing consistent articulatory output across different speakers and the need to balance visual and audio feedback.
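
To make the AAI framing above concrete, the following is a minimal toy sketch (not the speaker's model) of framewise acoustic-to-articulatory regression: synthetic acoustic feature frames stand in for MFCCs, synthetic articulator coordinates stand in for EMA sensor positions, and a ridge regression maps one to the other. All names, dimensions, and data below are illustrative assumptions.

    # Toy illustration of Acoustic-to-Articulatory Inversion (AAI):
    # learn a framewise mapping from acoustic features (stand-ins for MFCCs)
    # to articulator positions (stand-ins for EMA sensor coordinates).
    # All data here is synthetic; this is NOT the speaker's method.
    import numpy as np

    rng = np.random.default_rng(0)
    n_frames, n_acoustic, n_artic = 2000, 13, 12  # e.g. 13 MFCCs -> 6 sensors x (x, y)

    # Synthetic "ground truth" linear relationship plus noise, purely for demonstration.
    true_map = rng.normal(size=(n_acoustic, n_artic))
    acoustic = rng.normal(size=(n_frames, n_acoustic))  # acoustic feature frames
    articulatory = acoustic @ true_map + 0.1 * rng.normal(size=(n_frames, n_artic))

    # Closed-form ridge regression: W = (X^T X + lambda I)^-1 X^T Y
    lam = 1e-2
    W = np.linalg.solve(acoustic.T @ acoustic + lam * np.eye(n_acoustic),
                        acoustic.T @ articulatory)

    # "Invert" new speech frames: predict articulator trajectories from acoustics.
    test_acoustic = rng.normal(size=(5, n_acoustic))
    predicted_articulators = test_acoustic @ W
    print(predicted_articulators.shape)  # (5, 12): one articulator vector per frame

In practice, AAI systems use sequence models trained on real articulatory recordings and must address the cross-speaker consistency issues mentioned in the abstract; the sketch only illustrates the input-output shape of the task.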

Biography: Charles McGhee is a fourth-year PhD student at the University of Cambridge. He is a member of the Automated Language Teaching and Assessment (ALTA) group within the Department of Engineering, where he is supervised by Prof. Kate Knill. His work focuses on articulatory inversion and synthesis with applications to pronunciation training and facial animation. Prior to joining Cambridge, he completed an MSc in Speech and Language Processing (SLP) at the University of Edinburgh.