A 4 April 2023 workshop for ELIAI postdoctoral researchers to present and discuss their research

The third Turing AI Postdocs Workshop, held on 4 April 2023, was hosted by ELIAI and organised by Director Mirella Lapata. The workshop provided an opportunity for the ELIAI postdoctoral researchers Xue Li (project title "Causal Knowledge Graphs for Counterfactual Claims Reasoning"), Davide Moltisanti (project title "Grounding Actions and Action Modifiers in Instructional Videos"), Victor Prokhorov (project title "Multimodal Interpretability from Partial Sight"), and Cheng Wang (project title "Asking your Self-Driving Car to Explain its Decisions") to present and discuss their most recent work on their respective projects. These workshops continue to be held periodically as the researchers progress towards their project objectives.

Newly hired postdoc Mark Chevallier presented the project "Constrained Neural Network Training via Theorem Proving, with Applications to Safe Human-Robot Interaction". Edoardo Ponti provided the review of "Gradient-based Learning of Complex Latent Structures", as postdoc Emile van Krieken will not begin his role until 1 September 2023. Amos Storkey provided an update on the postdoc recruitment for "Multimodal Integration for Sample-Efficient Deep Reinforcement Learning".

Xue Li

Project: Causal Knowledge Graphs for Counterfactual Claims Reasoning
Talk Title: Knowledge Graphs based Misinformation Detection on Counterfactual Claims about Climate Change
Abstract: Climate change is a crisis that requires global action. As social media significantly influences people's opinions, it is essential to reduce the harmful misinformation about climate change spreading there. However, a single AI approach is not enough for misinformation detection (MD), especially for counterfactual claims. In this project, we aim to develop a system that applies NLP techniques to parse a given counterfactual claim and then queries knowledge graphs (KGs) for evidence to support MD in the climate change domain. In addition, a probability will be computed to represent how much the conclusion can be trusted. In this talk, we will present the logic for determining the truth value of a given counterfactual claim, our current experiments on entailment prediction based on language models, and our dataset, and then summarise future work.

Davide Moltisanti

Project: Grounding Actions and Action Modifiers in Instructional Videos
Talk Title: Modelling Object Changes to Understand Adverbs in Videos
Abstract: Adverb recognition is the task of understanding how an action is performed in a video (e.g. "chop something finely or coarsely"). Objects carry a strong visual signal regarding the way actions are performed. For example, if we chop parsley coarsely, the final state of the vegetable will look quite different compared to how it would look if we chopped it finely. In other words, the way objects transition from one state into another can help us understand the way actions are performed. Current approaches for this task ignore this, and in this talk we will explore ideas on how we can model object changes to understand action changes in videos.
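The central idea of the talk, modelling how an object's state changes in order to infer how the action was performed, can be illustrated with a minimal sketch. In the hypothetical PyTorch snippet below, an adverb is classified from object features taken before and after the action; the feature dimension, the adverb vocabulary size and the simple concatenation-based transition encoding are assumptions made for illustration, not the project's model.

```python
# Illustrative sketch: classifying an adverb from an object's state change.
# The feature extractor, adverb vocabulary and concatenation-based transition
# encoding are assumptions for illustration only.
import torch
import torch.nn as nn


class StateChangeAdverbClassifier(nn.Module):
    """Predicts an adverb (e.g. 'finely' vs 'coarsely') from the visual state
    of an object before and after an action."""

    def __init__(self, feat_dim: int = 512, num_adverbs: int = 10):
        super().__init__()
        # Encode the transition between the two object states.
        self.transition = nn.Sequential(
            nn.Linear(2 * feat_dim, 256),
            nn.ReLU(),
            nn.Linear(256, num_adverbs),
        )

    def forward(self, start_feat: torch.Tensor, end_feat: torch.Tensor) -> torch.Tensor:
        # Concatenate the 'before' and 'after' object features so the
        # classifier can reason about how the object changed.
        pair = torch.cat([start_feat, end_feat], dim=-1)
        return self.transition(pair)  # logits over adverbs


# Usage with random features standing in for video frames:
model = StateChangeAdverbClassifier()
start = torch.randn(4, 512)  # object features at the start of the action
end = torch.randn(4, 512)    # object features at the end of the action
print(model(start, end).shape)  # torch.Size([4, 10])
```

In a real system the start and end features would come from a video backbone localising the object, and the transition encoding would likely be richer than a simple concatenation.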
Victor Prokhorov

Project: Multimodal Interpretability from Partial Sight
Talk Title: Multimodal Interpretability from Partial Sight
Abstract: We seek to build deep generative models (DGMs) that capture the joint distribution over co-observed visual and language data (e.g. abstract scenes, COCO, VQA), while faithfully capturing the conceptual mapping between the observations in an interpretable manner. This relies on two key observations: (a) perceptual domains (e.g. images) are inherently interpretable, and (b) a key characteristic of useful abstractions is that they are low(er) dimensional (than the data) and correspond to some conceptually meaningful component of the observation. We will seek to leverage recent work on conditional neural processes (Garnelo et al., 2018) to develop partial-image representations that mediate effectively, and in an interpretable manner, between vision and language data. Evaluation of this framework will involve both the ability to generate multimodal data, compared against state-of-the-art approaches, and human-measured interpretability of the learnt representations. Our project image represents multi-modal data (images, text) as a "partial specification" that allows effective encoding and reconstruction of the data.

Cheng Wang

Project: Asking your Self-Driving Car to Explain its Decisions
Talk Title: Causal and Social Explanations for Autonomous Vehicle Motion Planning
Abstract: Artificial Intelligence (AI) has shown considerable success in autonomous vehicle (AV) perception, localization and decision-making. For such a safety-critical system, safe AI is a prerequisite before bringing AVs to market. Thus, investigating effective methods to improve AI safety is critical. One step towards safe AI is to make AI explainable and transparent. With this motivation, we present a novel framework to generate causal explanations for autonomous vehicles' decisions. Natural language conversations are used to provide explanations in response to a wide range of user queries that require associative, interventionist, or counterfactual causal reasoning. Our method relies on a generative model of interactions to simulate counterfactual worlds, which are used to identify the salient causes behind decisions. The method is tested in simulated scenarios with coupled interactions. The results demonstrate that our method correctly identifies and ranks the relevant causes while also delivering concise answers to the users' queries.
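To make the counterfactual step concrete, the sketch below ranks candidate causes of a driving decision by checking whether the decision changes in a counterfactual world produced by a simulator. The scenario dictionary, the simulate and decision interfaces, and the toy interventions are hypothetical placeholders for illustration, not the framework presented in the talk.

```python
# Illustrative sketch of counterfactual cause ranking for a driving decision.
# The scenario representation, simulate()/decision() interfaces and the
# candidate interventions are hypothetical placeholders.
from typing import Callable, Dict, List, Tuple


def rank_causes(
    scenario: Dict,
    decision: Callable[[Dict], str],
    interventions: Dict[str, Callable[[Dict], Dict]],
    simulate: Callable[[Dict], Dict],
) -> List[Tuple[str, bool]]:
    """Ranks candidate causes of the ego vehicle's decision by checking, for
    each intervention, whether the decision changes in the counterfactual
    world produced by the simulator."""
    factual_choice = decision(simulate(scenario))
    results = []
    for name, intervene in interventions.items():
        counterfactual = simulate(intervene(dict(scenario)))
        changed = decision(counterfactual) != factual_choice
        results.append((name, changed))
    # Causes whose removal flips the decision are ranked first.
    return sorted(results, key=lambda r: not r[1])


# Toy usage: the ego vehicle stops because a pedestrian is crossing.
def toy_simulate(state: Dict) -> Dict:
    return state  # a real system would roll a learned interaction model forward


def toy_decision(state: Dict) -> str:
    return "stop" if state.get("pedestrian_crossing") else "go"


scenario = {"pedestrian_crossing": True, "lead_vehicle_braking": True}
interventions = {
    "no pedestrian": lambda s: {**s, "pedestrian_crossing": False},
    "lead vehicle not braking": lambda s: {**s, "lead_vehicle_braking": False},
}
print(rank_causes(scenario, toy_decision, interventions, toy_simulate))
# [('no pedestrian', True), ('lead vehicle not braking', False)]
```

A full system would replace the toy simulator with a learned generative model of interactions and attach natural language generation to the ranked causes.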