Talks that have been given by our members explaining areas of ELIAI research.

Causal Knowledge Graphs for Counterfactual Claims Reasoning (Project Summary)
Multimodal Interpretability from Partial Sight (Project Summary)
Asking Your Self-Driving Car to Explain its Decisions (Project Summary)
Grounding Actions and Action Modifiers in Instructional Videos (Project Summary)

Understanding adverbs in videos (Davide Moltisanti)
Abstract: Given a video showing a person performing an action, we are interested in understanding how the action is performed (e.g. chop quickly/finely). Current methods for this underexplored task model adverbs as invertible action modifiers in a joint visual-text embedding space. However, these methods do not guide the model to look for salient visual cues in the video to learn how actions are performed. We thus suspect models learn spurious data correlations rather than actually learning the visual signature of an adverb. We first aim to demonstrate this, showing that when videos are altered (e.g. objects are masked, playback is edited) adverb recognition performance does not drop considerably. To address this limitation, we then plan to design a mixture-of-experts method that is trained to look for specific visual cues, e.g. the model should look at temporal dynamics for speed adverbs (quickly/slowly) or at spatial regions for completeness adverbs (fully/partially).

Learning Action Changes by Measuring Verb-Adverb Textual Relationships (Davide Moltisanti)
Abstract: The goal of this work is to understand the way actions are performed in videos. That is, given a video, we aim to predict an adverb indicating a modification applied to the action (e.g. cut "finely"). We cast this problem as a regression task: we measure textual relationships between verbs and adverbs to generate a regression target representing the action change we aim to learn. We test our approach on a range of datasets and achieve state-of-the-art results on both adverb prediction and antonym classification. Furthermore, we outperform previous work when we lift two commonly assumed conditions: the availability of action labels during testing and the pairing of adverbs as antonyms. Existing datasets for adverb recognition are either noisy, which makes learning difficult, or contain actions whose appearance is not influenced by adverbs, which makes evaluation less reliable. To address this, we collect a new high-quality dataset: Adverbs in Recipes (AIR). We focus on instructional recipe videos, curating a set of actions that exhibit meaningful visual changes when performed differently. Videos in AIR are more tightly trimmed and were manually reviewed by multiple annotators to ensure high labelling quality. Results show that models learn better from AIR given its cleaner videos. At the same time, adverb prediction on AIR is challenging for models, demonstrating that there is considerable room for improvement.
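To make the regression formulation above more concrete, the minimal sketch below shows one way a scalar "action change" target could be derived from verb-adverb text embeddings. It is illustrative only and is not the paper's actual method: embed() is a stand-in for a pretrained text encoder, and the function action_change_target() and the explicit antonym pairing are assumptions made purely for this example.

```python
# Illustrative sketch: deriving a scalar regression target for "how an adverb
# changes an action" from text embeddings. Not the authors' formulation.
import hashlib
import numpy as np


def embed(text: str, dim: int = 64) -> np.ndarray:
    """Stand-in text encoder: deterministic pseudo-random unit vector per string.
    A real system would use a pretrained word/sentence encoder here."""
    seed = int.from_bytes(hashlib.sha256(text.encode()).digest()[:4], "little")
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)


def action_change_target(verb: str, adverb: str, antonym: str) -> float:
    """Signed scalar: how far the phrase 'verb adverb' moves the verb embedding
    along the adverb-vs-antonym direction (positive = towards the adverb)."""
    base = embed(verb)
    modified = embed(f"{verb} {adverb}")
    direction = embed(adverb) - embed(antonym)
    direction = direction / np.linalg.norm(direction)
    return float(np.dot(modified - base, direction))


# A clip labelled ("cut", "finely") would be paired with such a scalar target and
# a regressor over video features trained to predict it. With this random
# stand-in encoder the value is arbitrary; a real encoder would give the antonym
# an opposite-signed target.
print(action_change_target("cut", "finely", "coarsely"))
```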
Gradient-based Learning of Complex Latent Structures (Project Summary)
Verified Neural Network Training for Safe Human-Robot Interaction (Project Summary)
Multimodal Integration for Sample-Efficient Deep Reinforcement Learning (Project Summary)

This article was published on 2024-11-22