Friday 21 November - 11am

Speaker: Marco Valentino (University of Sheffield)

Title: Reconciling Plausible and Formal Reasoning in Large Language Models

Abstract: A persistent challenge in AI is the effective integration of plausible and formal reasoning - the former concerning the plausibility and contextual relevance of arguments, the latter their logical and structural validity. Large Language Models (LLMs) are not immune to this challenge. By virtue of their extensive pre-training, LLMs can generate plausible and linguistically fluent arguments, but struggle with the systematicity and consistency required for robust logical reasoning. At the same time, LLMs offer new opportunities to study and overcome this intrinsic conflict. This talk will focus on such opportunities, presenting several research directions aimed at reconciling plausible and formal reasoning, including LLM-driven neuro-symbolic integration, quasi-symbolic abstractions, and latent circuit disentanglement. The final part of the talk will discuss the persistent challenges in achieving truly unified reasoning and outline possible directions for future research in the field.

Biography: Marco is a Lecturer in Artificial Intelligence and Applications of AI in the Natural Language Processing (NLP) group at the University of Sheffield. Before joining Sheffield, he was a member of the Neuro-Symbolic AI Group at the Idiap Research Institute in Switzerland, and obtained a PhD in Computer Science from the University of Manchester. His research focuses on developing AI systems that use explanation as a core mechanism for learning and reasoning, investigating the integration of neural and symbolic AI methods. He is also interested in developing methodologies to interpret, control, and evaluate Large Language Models (LLMs), with a focus on disentangling knowledge acquisition from abstract logical reasoning and enabling out-of-distribution and out-of-domain generalisation.