ANC Workshop - 18/02/2025

The Structured Representation/Concept-Learning session

In this session, we'll hear about three projects that explore different ways of using hierarchical structure and composition to facilitate effective perceptual learning and understanding.

Speaker: Mattia Opper

Title: Discrete Structure and Neural Networks: Inductive Biases and Opportunities

Abstract: Generally, neural networks are tasked with learning connective structures between input tokens implicitly, if at all. What happens when we make this explicit? In other words, what happens when we task a neural network with learning to route information in a discrete, systematic fashion? We present two networks. The first, the Self-Structuring AutoEncoder (Self-StrAE), learns representations that both define their own structure and are in turn defined by it, essentially imposing a structural constraint on its representations such that they come to define their own compositional compression pathways. Self-StrAE operates on a per-sequence basis. Its extension, Banyan, takes this capability to a global level, learning connective structures over potentially whole corpora. Both networks require minimal parameters and compute, yet produce high-quality representations efficiently thanks to the structural constraints imposed on them. We provide an overview of how they operate and what makes them succeed. Finally, we conclude by presenting a pathway for future research that uses structure not only as an effective inductive bias but also as a potentially more flexible way to inject discreteness where it can be beneficial in and of itself: flexible tokenisation.
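
As a rough illustration of the kind of explicit, discrete routing the abstract describes, the sketch below greedily merges the most similar adjacent pair of token embeddings until a single root remains, yielding a binary composition tree. This is a generic illustration under assumed choices (cosine similarity as the merge criterion, averaging as the composition function), not the actual Self-StrAE or Banyan algorithm.

import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def greedy_compose(embeddings):
    # Greedily merge adjacent items until one root remains. Returns the induced
    # binary tree as nested tuples of token indices plus the root vector.
    # Averaging stands in for a learned composition function (an assumption).
    items = [(i, e) for i, e in enumerate(embeddings)]
    while len(items) > 1:
        sims = [cosine(items[i][1], items[i + 1][1]) for i in range(len(items) - 1)]
        j = int(np.argmax(sims))                      # most similar adjacent pair
        merged_tree = (items[j][0], items[j + 1][0])
        merged_vec = (items[j][1] + items[j + 1][1]) / 2
        items[j:j + 2] = [(merged_tree, merged_vec)]
    return items[0]

tree, root = greedy_compose(np.random.randn(5, 8))
print(tree)  # e.g. ((0, 1), ((2, 3), 4)): the discrete structure information is routed through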

 

Speaker: Magdalena Proszewska

Title: Graph concept learning and pattern discovery with Graph Neural Networks

Abstract: Traditional Graph Neural Networks (GNNs), built on message passing, iteratively update node representations based on their neighbors but struggle to capture higher-order structures and meaningful patterns in datasets. Concept-based GNNs offer an alternative by processing subgraphs rather than individual nodes, enabling the learning of interpretable, human-aligned concepts. Approaches like Graph Kernel Networks and Prototypical GNNs identify recurring structural patterns, making decisions more transparent and inherently interpretable. Most Explainable AI (XAI) methods for GNNs focus on post-hoc explainers and instance-level explanations, often providing local insights without a clear global understanding of the model’s reasoning. Concept-based GNNs shift the focus from explaining black-box models to designing self-explainable architectures that balance interpretability and predictive performance. In this talk, we will explore these models and discuss their XAI capabilities.
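
For readers less familiar with the message-passing scheme mentioned above, here is a minimal sketch of a single update step, assuming plain mean aggregation over neighbours followed by a linear transform and a ReLU; the function and variable names are illustrative and not taken from any specific model discussed in the talk.

import numpy as np

def message_passing_step(node_feats, adjacency, weight):
    # node_feats: (num_nodes, dim), adjacency: (num_nodes, num_nodes) binary,
    # weight: (dim, dim) learnable matrix.
    deg = adjacency.sum(axis=1, keepdims=True).clip(min=1)   # avoid division by zero
    neighbour_mean = adjacency @ node_feats / deg            # aggregate neighbour features
    updated = (node_feats + neighbour_mean) @ weight         # combine self and neighbourhood
    return np.maximum(updated, 0.0)                          # ReLU nonlinearity

# Toy usage: a 3-node path graph with 4-dimensional node features.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
X = np.random.randn(3, 4)
W = np.random.randn(4, 4)
print(message_passing_step(X, A, W).shape)  # (3, 4)

Concept-based models, by contrast, reason over subgraph-level patterns rather than per-node updates of this kind.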

 

Speaker: Ivan Wegner

Title: Reviving Fodor and Pylyshyn's Challenge in Compositionality Research

Abstract: Systematicity, frequently considered an aspect of compositionality, is viewed as a desirable property of ML models. This has prompted a number of proposed benchmarks aiming to test for systematicity, as well as models and training regimes aiming to produce systematic behaviour. Oftentimes these proposals are phrased as 'addressing the challenge posed by Fodor and Pylyshyn (F&P)'. However, F&P argue for systematicity of representations, whereas the benchmarks and models proposed usually only look at systematicity of behaviour.

We want to draw attention to the crucial methodological distinction between systematicity of behaviour and systematicity of representations. We assess what dependencies may exist between systematic behaviour and systematic representations, and what the importance of each is. We also analyse the extent to which systematicity is tested by key benchmarks in the literature, whether behavioural or representational, and the strength of the systematicity required, as assessed by Hadley (1994). Finally, we discuss ways of assessing systematicity of representations in ML models as practised in the mechanistic interpretability field.