Tuesday, 6th June - 11am: Oana Camburu, Seminar

Title: Can AI Models Give Us Correct, Faithful, and Consistent Natural Language Explanations for Their Predictions?

Abstract: As advances in AI make models increasingly present in our lives, human-interpretable explanations for their predictions are not only of regulatory importance in high-risk domains but also a way to improve the trustworthiness and usefulness of the models more generally. To this end, we argue that explanations should first be correct; otherwise, a model that appears to be "right for the wrong reasons" would likely be of little use to end users. Second, the explanations should be faithful to the internal decision-making process of the model; otherwise, a model "lying" to its users may dangerously influence their perception of the model and, in turn, their decisions. Third, a model that appears "not to make up its mind" about its reasons (i.e., one with inconsistent explanations) is unlikely to be trustworthy. In this talk, we will see a series of models, benchmarks, adversarial attacks, and metrics for examining the capabilities of AI models to explain their predictions in natural language. We will see applications in NLP, computer vision, and the medical domain. The talk will also point to a series of open questions in this research direction.

Bio: Oana-Maria Camburu is a Senior Research Fellow in the Department of Computer Science at University College London, holding an Early Career Leverhulme Fellowship. Prior to this, Oana was a postdoc in the Department of Computer Science at the University of Oxford, where she also obtained her PhD with the thesis "Explaining Deep Neural Networks". Her main research interests lie in explainability for deep learning models, with applications in natural language processing and vision-language tasks, for which she has received several fellowships and grants.
Date and time: Jun 06 2023, 11.00 - 12.00

This event is co-organised by ILCC and by the UKRI Centre for Doctoral Training in Natural Language Processing, https://nlp-cdt.ac.uk.

Venue: Bayes Centre G.03, and by online invitation via https://teams.microsoft.com/l/meetup-join/19%3ameeting_ZjgyYTVjZTMtNTI3NC00MTJiLWIxOWQtY2M2NGE1YzdmZWQ4%40thread.v2/0?context=%7b%22Tid%22%3a%222e9f06b0-1669-4589-8789-10a06934dc61%22%2c%22Oid%22%3a%22238d77b9-3bc1-4644-a53b-7df4bdedb86d%22%7d