Ram Ramamoorthy and colleagues explore specifying for trustworthiness

[08/01/2024] Professor Ram Ramamoorthy and colleagues from the UKRI TAS programme explore the challenges of specifying what trust means for autonomous systems, in a recent article published in Communications of the ACM.


Autonomous Systems (AS), systems that can take action with little or no human supervision, are increasingly moving out of safety-controlled industrial settings and becoming part of our daily lives: driverless cars, healthcare robotics, and uncrewed aerial vehicles (UAVs) are becoming more common and interact ever more closely with humans. The UKRI Trustworthy Autonomous Systems (TAS) programme researches how to ensure that AS are both trusted and trustworthy.

What does trust mean in autonomous systems?

Trust is not static: it can be gained and lost over time, and different research disciplines define it in different ways.

Researchers within TAS examine the challenges of specifying what trustworthiness means for autonomous systems. In their recent paper, they focus on the notion of trust in the relationship between humans and AS, and explore key "intellectual challenges" in ensuring that autonomous systems can be trusted. These challenges are not specific to any one field and are shaped by the unpredictable situations in which autonomous systems must operate.

The article takes a broad view of specification, concentrating on top-level requirements including, but not limited to, functionality, safety, security, and other properties that contribute to the trustworthiness of AS.

Different kinds of systems pose different challenges: whether a system uses a single agent or a group of them, and whether it assists or collaborates with humans, all affect how its trustworthiness should be specified.

Key challenges in automated driving and healthcare

For example, two key challenges in the area of automated driving are the lack of machine-readable specifications that formally express acceptable driving behaviour, and the need to specify the behaviour of other road users. The U.K. Highway Code asks drivers not to "pull out into traffic so as to cause another driver to slow down." Without further constraints on what other drivers could possibly do, it is difficult to define the right behaviour for a system. If we make assumptions during this process and those assumptions are not met, we risk undermining the safety of the entire system.
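As an illustration of what a machine-readable fragment of such a rule might look like (our sketch, not from the article), the Highway Code clause above can be approximated as a checkable predicate under strong simplifying assumptions: straight-line motion, an approaching vehicle that holds constant speed, and a fixed safety margin. All names and numbers here are hypothetical.

```python
# Hypothetical sketch: approximating the Highway Code clause
# "do not pull out into traffic so as to cause another driver to slow down"
# as a machine-checkable predicate. Assumptions (all simplifications):
# straight-line motion, the approaching vehicle holds a constant speed,
# and the merging vehicle accelerates uniformly from rest until it
# matches that speed.

def safe_to_pull_out(gap_m: float, other_speed_mps: float,
                     our_accel_mps2: float,
                     safety_margin_m: float = 10.0) -> bool:
    """Return True if merging now would not force the approaching
    vehicle to slow down, under the stated assumptions."""
    # Time until our speed matches the approaching vehicle's speed.
    t_match = other_speed_mps / our_accel_mps2
    # Distance we cover while accelerating from a standstill.
    our_dist = 0.5 * our_accel_mps2 * t_match ** 2
    # Distance the approaching vehicle covers in the same time.
    their_dist = other_speed_mps * t_match
    # Once speeds match, the gap stops shrinking; merging is acceptable
    # if the remaining gap still exceeds the safety margin.
    return gap_m - (their_dist - our_dist) >= safety_margin_m
```

Even this toy predicate shows why assumptions matter: if the approaching driver accelerates instead of holding speed, the premise fails, and the "safe" verdict fails with it.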

In healthcare, AI and AI-enabled autonomy already deliver benefits: more accurate and automated diagnostics, autonomy in robotic surgery, and entirely new approaches to drug discovery and design.

However, gaps in test accuracy remain. The challenge in this field is to account for differences in tools and operators, as well as varying conditions and severity levels. When deep learning is used to automate interpretation, it is crucial that the system can explain its decisions; otherwise an AI system might achieve high accuracy by taking shortcuts, relying on irrelevant information instead of correctly identifying the primary problem. The specific challenge here is how to write specifications for 'black box' models.

The paper came out of joint efforts across the whole UKRI TAS programme, beginning with a workshop in the 2021 TAS All Hands Meeting. 

UKRI Trustworthy Autonomous Systems Programme

The TAS programme is a collaborative UK-based platform composed of Research Nodes and a Hub, united by the purpose of developing world-leading best practice for the design, regulation and operation of autonomous systems. The programme's central aim is to ensure that autonomous systems are ‘socially beneficial’, protect people’s personal freedoms and safeguard physical and mental wellbeing.

The project addresses public concern and the potential risks associated with Autonomous Systems by ensuring they are both trustworthy by design and trusted by those who use them, from individuals to society and government. Only by addressing these concerns and risks will autonomous systems be trusted, allowing them to be adopted more widely.

TAS comprises seven distinct research strands, termed Nodes: trust, responsibility, resilience, security, functionality, verifiability, and governance and regulation. Each Node will receive just over £3 million in funding from UKRI to conduct its research.

TAS Governance and Regulation Research Node

Led by Professor Subramanian Ramamoorthy from the School of Informatics and Edinburgh Centre for Robotics, the team are tasked with developing the governance and regulation of Trustworthy Autonomous Systems (TAS). By developing a novel framework for the certification, assurance and legality of TAS, the project will address whether such systems can be used safely. 

Related links


TAS Governance and Regulation Research Node