Informatics researchers to help make autonomous systems more responsible

[2021] Nadin Kokciyan and Michael Rovatsos will be working on one of the strands of a multi-disciplinary project that seeks to address responsibility gaps in autonomous systems. Informatics researchers will focus on the development of new techniques and tools for making autonomous systems more answerable. The project is led by Professor Shannon Vallor, Director of the Centre for Technomoral Futures at the Edinburgh Futures Institute.

Image: Nadin Kokciyan and Michael Rovatsos

Drawing on research in philosophy, cognitive science, law and AI, the project will develop new ways for autonomous system developers, users and regulators to bridge responsibility gaps by boosting the systems’ answerability. 

Making machines better at giving answers 

We are surrounded by autonomous systems, whether self-driving cars, virtual assistants or plane autopilots, and increasingly these solutions are also making their way into high-stakes areas such as health and finance. This creates a vital need to ensure we can trust these systems. A key to maintaining social trust is the ability to hold others responsible for their actions, and it is no different for autonomous systems. 

However, it is not always possible to identify a person who is morally responsible for an action. Such responsibility gaps can also occur in machines, or in systems that include machines, when their actions are not under the control of a morally responsible person, or cannot be fully understood or predicted by humans because complex 'black-box' algorithms drive those actions.  

Answerability - ensuring someone is 'answerable' for an act - is an important component of responsibility. It encompasses a richer set of responsibility practices than explainability in computing or accountability in law, and its main purpose is to supply people who are affected by our actions with the answers they need or expect. The overall ambition of the project is to make systems as a whole (machines and people) better at giving the answers that are owed.  

How to design an answerable machine 

Informatics researchers will aim to improve autonomous systems’ ability to carry out a dialogue within a larger process in which human developers, users and regulators can work with the system to bring forward the expected answers.  

To achieve this, a formal representation language first needs to be developed to capture the semantics of questions and answers. 
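The article does not describe what this representation language will look like, but as a rough illustration, the following Python sketch shows how questions and answers might be captured as structured objects. All class names, fields and question types here are hypothetical assumptions, not the project's actual design.

```python
# A minimal, hypothetical sketch of a formal representation of questions
# and answers. The project's actual language is not specified in this
# article; the names and fields below are illustrative assumptions only.
from dataclasses import dataclass, field
from enum import Enum


class QuestionType(Enum):
    """Kinds of answers a stakeholder might be owed (assumed categories)."""
    WHY = "why"          # justification for a decision
    WHAT = "what"        # description of what the system did
    HOW = "how"          # the mechanism behind an outcome
    WHAT_IF = "what_if"  # counterfactual alternatives


@dataclass
class Question:
    asker: str           # e.g. a user, developer or regulator
    qtype: QuestionType
    subject: str         # the decision or action being queried
    content: str         # natural-language form of the question


@dataclass
class Answer:
    responder: str       # the stakeholder (or system) answering
    question: Question
    content: str
    grounds: list[str] = field(default_factory=list)  # supporting evidence
```

Giving questions and answers explicit types and grounds in this way is one route to machine-readable semantics, on which a dialogue process could then be built.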

The next step of the process will see researchers develop a computational framework. A mediator agent will be included in the framework to facilitate communication among stakeholders and guide the process with relevant questions and answers at appropriate points. This will support stakeholders in putting forward their arguments in a structured manner using constructs provided by the platform.  
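Again, the framework's actual interfaces are not specified in this article; the sketch below, reusing the hypothetical Question and Answer types above, only illustrates the idea of a mediator that routes stakeholders' questions and tracks the ones that go unanswered. The routing policy shown is an assumption for illustration.

```python
# A hypothetical sketch of a mediator agent facilitating the dialogue.
# Interfaces, names and the routing policy are illustrative assumptions.
from typing import Optional, Protocol


class Stakeholder(Protocol):
    """Anything that can respond to a question: a person's proxy or the system."""
    def respond(self, question: Question) -> Optional[Answer]: ...


class MediatorAgent:
    """Facilitates the dialogue and records which questions went unanswered."""

    def __init__(self, stakeholders: dict[str, Stakeholder]):
        self.stakeholders = stakeholders
        self.transcript: list[tuple[Question, Optional[Answer]]] = []

    def route(self, question: Question) -> Optional[Answer]:
        """Forward a question to the stakeholder best placed to answer it."""
        responder = self._select_responder(question)
        answer = responder.respond(question) if responder else None
        self.transcript.append((question, answer))
        return answer

    def _select_responder(self, question: Question) -> Optional[Stakeholder]:
        # Illustrative policy only: justification questions go to the
        # developer, everything else to the system itself.
        key = "developer" if question.qtype is QuestionType.WHY else "system"
        return self.stakeholders.get(key)

    def open_gaps(self) -> list[Question]:
        """Unanswered questions flag potential responsibility gaps."""
        return [q for q, a in self.transcript if a is None]
```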

Last but not least, the identified responsibility gaps and the feedback gathered will be fed back into the target AI system, helping to clarify the requirements it must satisfy to improve its answerability in the future. 

This work will help create tools and guides for enhancing system answerability through design. 

As AI researchers, we have to develop sociotechnical systems that address the stakeholders’ needs first. We will bridge the responsibility gaps by building trustworthy autonomous systems with answerability in mind. We are really excited to be part of this multidisciplinary project. 

Nadin Kokciyan
Lecturer in Artificial Intelligence, School of Informatics

UKRI Trustworthy Autonomous Systems Programme 

The project is funded by the UKRI Trustworthy Autonomous Systems Programme: Responsibility Grant.  

The UKRI Trustworthy Autonomous Systems Programme is a collaborative UK-based platform comprising Research Nodes and a Hub, united by the purpose of developing world-leading best practice for the design, regulation and operation of autonomous systems. The central aim of the programme is to ensure that autonomous systems are ‘socially beneficial’, protect people’s personal freedoms and safeguard physical and mental wellbeing. The TAS Governance and Regulation Research Node is led by Professor Subramanian Ramamoorthy from the School of Informatics.

Related links

PPLS News Story

Trustworthy Autonomous Systems