Friday, 11th October 2024, 11.00 - 12.00
Simone Stumpf: Seminar
Location: IF G.03 and on Teams

Title: Five reasons why we need interdisciplinary research for Responsible AI

Abstract: We are currently on the cusp of a revolution in AI technologies, many of which are being integrated into everyday life. The success of generative AI has multiplied fears that this technology is spinning out of control and needs to be clamped down on. As a consequence, many voices have called for AI to be more ‘responsible’. In this talk, I will review current efforts at developing responsible AI and where they fall short, and show five areas where interdisciplinary research can help make AI technology trustworthy, safe and ethical.

Bio: Dr Simone Stumpf is Professor of Responsible and Interactive AI at the School of Computing Science, University of Glasgow. She has a long-standing research focus on user interactions with machine learning systems. Her research includes self-management systems for people living with long-term conditions, teachable object recognisers for people who are blind or have low vision, and AI fairness. Her work has contributed to Explainable AI (XAI) through the Explanatory Debugging approach to interactive machine learning, providing design principles for better human-computer interaction and investigating the effects of greater transparency. The prime aim of her work is to empower all users to use AI effectively. https://www.gla.ac.uk/schools/computing/staff/simonestumpf/

This event is co-organised by ILCC and by the UKRI Centre for Doctoral Training in Natural Language Processing, https://nlp-cdt.ac.uk.