Friday, 25th October, 11am - Arabella Sinclair: Seminar

Title:  Are Language Models good models of human linguistic behaviour? An example from Structural Priming

Abstract:  

Language Models (LMs), like humans, are exposed to examples of language, and learn to comprehend and produce language from these examples. Under certain assumptions about the context in which these examples are processed (e.g. that LMs observe only written or transcribed language, with no additional modalities and no differentiation across language producers), LMs can serve as cognitive models of human language processing, able to predict comprehension and production behaviour. Structural Priming is one such paradigm for evaluating comprehension and production behaviour in humans: listeners comprehend a target sentence more readily after recent exposure to a prime sentence with the same structure.

This talk will present work investigating structural priming in LMs and compare LM and human behaviour when primed. I will first present work that finds evidence of structural priming behaviour in a large suite of LMs (Sinclair et al., 2022). In follow-up work we explore which factors predict their priming behaviour and whether these factors are similar to those predicting human priming (Jumelet et al., 2024). I will end by discussing some recent work that directly compares LM and human responses to the same stimuli. These findings have implications for understanding the in-context learning capabilities of LMs and the extent to which LMs can serve as models of comprehension or production, as well as highlighting macro- vs. micro-level differences between LM and human responses to the same stimuli.
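As a concrete illustration of how priming can be operationalised in an LM, the sketch below compares the log-probability a model assigns to a target sentence after a structurally congruent prime versus an incongruent one. This is a minimal sketch only: the model choice (GPT-2), the example sentences, and the exact scoring metric are illustrative assumptions, not the method of the papers above.

```python
# Minimal sketch of measuring a structural priming effect in an LM:
# score the same target sentence after a congruent vs. incongruent prime.
# Model, sentences, and metric are illustrative assumptions.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def target_logprob(prime: str, target: str) -> float:
    # Tokenise prime and target separately (target with a leading space so
    # GPT-2's BPE segments it as it would mid-text), then sum the
    # log-probabilities of the target tokens conditioned on the prime.
    prime_ids = tokenizer(prime, return_tensors="pt").input_ids
    target_ids = tokenizer(" " + target, return_tensors="pt").input_ids
    full_ids = torch.cat([prime_ids, target_ids], dim=1)
    with torch.no_grad():
        logits = model(full_ids).logits
    log_probs = torch.log_softmax(logits, dim=-1)
    total = 0.0
    for pos in range(prime_ids.shape[1], full_ids.shape[1]):
        total += log_probs[0, pos - 1, full_ids[0, pos]].item()
    return total

# Prepositional-object (PO) target, preceded by a congruent PO prime
# or an incongruent double-object (DO) prime.
target = "The teacher gave a book to the student."
po_prime = "The chef handed a plate to the waiter."   # same structure
do_prime = "The chef handed the waiter a plate."      # different structure

effect = target_logprob(po_prime, target) - target_logprob(do_prime, target)
print(f"Priming effect (log-probability difference): {effect:.3f}")
# A positive value suggests the congruent prime raised the target's probability.
```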

Bio:

I am very interested in exploring how humans adapt to one another as they interact and communicate. To that end, my work involves analysing human properties of language in dialogue settings, analysing whether machine-generated text contains human-like linguistic properties, and, more recently, analysing whether modern language models exhibit patterns similar to humans when exposed to certain linguistic phenomena. I am interested in applying my work to educational settings, as well as exploring more general communicative properties of language. My research interests include Natural Language Processing, Computational Linguistics, Cognitive Science, and Education.

My undergraduate degree was in Computer Science at the University of Aberdeen, where I gained an interest in Natural Language Processing. I then went on to do an MPhil in Advanced Computer Science at the University of Cambridge, where I was supervised by Simone Teufel. After that, I spent just under a year in Berlin working as a back-end programmer at a software start-up, before returning to Scotland to do my PhD in Computational Linguistics at the University of Edinburgh, where I worked with Jon Oberlander, Dragan Gašević, Adam Lopez and Chris Lucas. I then went on to do a postdoc at the University of Amsterdam, where I was part of the Dialogue Modelling Group, working with Raquel Fernández.