Friday 16 January 2026 - 11am

Speaker: Matthew Wiesner (Johns Hopkins University)

Title: Modelling Accent and Language at Scale 

Abstract: State-of-the-art LID models work reliably for ∼100 languages, but there are orders of magnitude more accents and dialects, and annotating all of them is intractable. Furthermore, current LID models fail dramatically when applied to accented speech, and very little annotated accented data exists to support accented TTS. This talk explores these challenges and proposes a partial solution using massive amounts of diverse, widely available data collected from radio, with soft language labels in the form of geolocations. The talk then explores the link between robustness to accented speech and the capacity to model sequence-level data more effectively. Finally, these models and insights are shown to greatly improve LID on accented speech and can be used to mine accented speech to support scalable, controllable accented TTS.

Biography: Matthew Wiesner is a researcher at the Johns Hopkins University Human Language Technology Center of Excellence (HLTCOE) and a visiting researcher at the Laboratoire Interdisciplinaire des Sciences du Numérique (LISN, formerly LIMSI). He received his PhD and MS in Electrical Engineering from Johns Hopkins University in 2021 and 2016, respectively, under the supervision of Jan Trmal and Sanjeev Khudanpur, and his Bachelor's degree in Electrical Engineering from McGill University in 2013. His research interests are broadly focused on speech processing, with an emphasis on multilinguality. He previously worked primarily on automatic speech recognition, speech translation, and keyword search. More recently, his work has focused on language identification, voice anonymization, and multi-talker ASR.