Informatics student wins best paper award for a study on using lessons learnt from steamboat accidents in developing AI governance

[13/10/2022] Bhargavi Ganesh, an Informatics PhD student working on a project developing a Responsibility Framework for Governing Trustworthy Autonomous Systems, won a best paper award at the We Robot 2022 conference. The paper, co-authored with Professor Stuart Anderson from the School of Informatics and Professor Shannon Vallor from the Edinburgh Futures Institute, looks at lessons learned from US government responses to steamboat accidents and how they can be applied to AI governance today.

Photo of Bhargavi Ganesh

AI is often described as an entirely new phenomenon in need of brand-new tools for its governance and regulation. The complexity of AI-based systems is often thought to suggest that governance of these systems is unmanageable in the absence of bold and unprecedented regulatory measures.  

If It Ain't Broke Don't Fix It

In their paper, entitled “If It Ain't Broke Don't Fix It: Steamboat Accidents and their lessons for AI Governance”, the authors use the historical example of steamboat accidents in the 1800s to challenge this notion. They argue that there are already many governance tools at our disposal, as well as promising new policy recommendations, which, if implemented in a coordinated manner, can be effective in ensuring the safe and ethical deployment of AI-based systems. The paper highlights the constructive nature of US government responses to steamboat accidents, despite the limited governance resources available at the time.

The authors note that the process of regulation is not linear and requires trial and error to accomplish its necessary aims. Many of those who remain sceptical about AI governance incorrectly portray regulation as something that must be implemented at a single snapshot in time. Instead, as technology and the human activity surrounding it co-evolve, the guardrails and policies needed to ensure their safety must continue to evolve as well.

The authors also argue that the steamboat case study highlights the need for independent government auditors in the case of AI and indicates the potential usefulness of licensing mechanisms. However, they note that innovative approaches to governance will be needed in relation to AI to avoid the creation of perverse market incentives and exacerbation of existing power imbalances. 

How historical perspective helps

Finally, in noting some of the modern governance challenges posed by AI, the authors argue that maintaining a historical perspective helps to target these novelties more precisely when generating policy recommendations in their own interdisciplinary research group. 

They suggest that the lack of global standardisation of AI regulation should be viewed as an opportunity rather than a hindrance for AI governance. During the steamboat era, Britain was inspired by US regulatory efforts to pass comprehensive steamboat regulation, while the US was inspired by Britain and France to adopt certain engineering standards and penalties for steamboat inspectors. In current AI discourse, some suggest that there will be competing visions for AI governance, and that we will have to see which vision ‘wins out’. This sets up a false and ahistorical picture of technology governance as a zero-sum game. Instead, AI governance would be better served if we viewed competing global efforts as a path to learning and to promoting innovation in governance itself.

“I was honoured to have the opportunity to discuss this paper with some of the foremost legal scholars and policy experts in the area of AI policy and regulation. It was incredibly gratifying to receive the best paper award recognition from this distinguished group of scholars, and to see that the historical analogy was useful for many as they thought about the appropriate next steps in designing AI regulation.”

Bhargavi Ganesh, PhD student at the School of Informatics

We Robot is an interdisciplinary conference that has brought together leading scholars and practitioners to discuss legal and policy questions relating to robots since 2012. This year it was held from 14th – 16th September at the University of Washington.