ANC Workshop - 4th February 2025

Speaker: Andrew Nam (Princeton University)

Title: Discrete, compositional, and symbolic representations through attractor dynamics (main reference)

Abstract: Symbolic systems are powerful frameworks for modeling cognitive processes, as they encapsulate the rules and relationships fundamental to many aspects of human reasoning and behavior. Central to these models are systematicity, compositionality, and productivity, making them invaluable in both cognitive science and artificial intelligence. However, certain limitations remain. For instance, the integration of structured symbolic processes and latent sub-symbolic processes has been implemented at the computational level through fiat methods such as quantization or softmax sampling, which assume, rather than derive, the operations underpinning discretization and symbolicization. In this work, we introduce a novel neural stochastic dynamical systems model that integrates attractor dynamics with symbolic representations to model cognitive processes akin to the probabilistic language of thought (PLoT). Our model segments the continuous representational space into discrete basins whose attractor states correspond to symbolic sequences, reflecting the semanticity and compositionality characteristic of symbolic systems through unsupervised learning rather than relying on pre-defined primitives. Moreover, like PLoT, our model learns to sample a diverse distribution of attractor states that reflects the mutual information between the input data and the symbolic encodings. This approach establishes a unified framework that integrates both symbolic and sub-symbolic processing through neural dynamics, a neuro-plausible substrate with proven expressivity in AI, offering a more comprehensive model that mirrors the complex duality of cognitive operations.
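The core idea of attractor basins yielding discrete states can be illustrated with a classic Hopfield network, a far simpler system than the stochastic model in the talk: stored binary patterns act as discrete attractor states, and the update dynamics pull any nearby corrupted input into one of their basins. This is a generic textbook sketch, not the authors' architecture; the patterns, Hebbian weights, and sign-update rule are standard illustrative choices.

```python
import numpy as np

# Two stored binary patterns act as discrete "symbolic" attractor states.
patterns = np.array([
    [1, -1, 1, -1, 1, -1],
    [1, 1, 1, -1, -1, -1],
])

# Hebbian weight matrix with zero diagonal (the standard Hopfield construction).
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)

def settle(state, steps=10):
    """Iterate synchronous sign updates; the state falls into the nearest basin."""
    s = np.sign(state).astype(float)
    for _ in range(steps):
        s = np.sign(W @ s)
    return s

# A corrupted version of pattern 0 (first bit flipped) is pulled back to it.
noisy = np.array([-1, -1, 1, -1, 1, -1], dtype=float)
print(settle(noisy))  # recovers patterns[0]
```

The point of the toy example is the qualitative behavior the abstract describes: a continuous-valued state is discretized not by an ad hoc quantization step but by the dynamics themselves, which carve the space into basins around the stored patterns.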

Bio: Andrew is a postdoctoral researcher at Princeton University in the AI Lab, Natural and Artificial Minds (NAM). He earned his bachelor’s degree in computer science and economics from UC Berkeley and completed his PhD in psychology at Stanford University under Jay McClelland. His research focused on abstract reasoning, rapid learning, and out-of-distribution generalization in biological and artificial intelligence systems. Currently, he studies latent representations and circuits in neural networks through a cognitive neuroscientific lens.

 

Speaker: Siddarth Venkatraman (Université de Montréal)

Title: Amortizing intractable inference in diffusion models for vision, language, and control (main reference)

Abstract: Diffusion models have proven to be powerful distribution estimators in vision, language, and reinforcement learning. However, using these models as priors in downstream tasks—where additional constraints or likelihoods must be incorporated—presents a challenging posterior sampling problem. This talk introduces Relative Trajectory Balance, an asymptotically unbiased approach to sampling from such posteriors. Applications include classifier-guided image generation, text infilling with discrete diffusion language models, offline reinforcement learning, and inverse problems in computer vision and astrophysics.
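To make the posterior sampling problem concrete, here is a deliberately naive baseline, not Relative Trajectory Balance itself: given samples from a "prior" generative model and a constraint expressed as a likelihood r(x), self-normalized importance sampling reweights prior samples by r(x) to approximate the posterior. The Gaussian-mixture prior and quadratic likelihood below are hypothetical stand-ins (in the talk's setting the prior would be a trained diffusion model, and amortized methods avoid this reweighting step entirely).

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "prior": a two-component Gaussian mixture playing the role of a
# trained generative model we can sample from but whose posterior is hard.
def sample_prior(n):
    comp = rng.integers(0, 2, size=n)
    return np.where(comp == 0,
                    rng.normal(-2.0, 0.5, n),
                    rng.normal(2.0, 0.5, n))

# Constraint / likelihood r(x), e.g. a classifier score favoring x near 1.5.
def likelihood(x):
    return np.exp(-0.5 * (x - 1.5) ** 2)

# Self-normalized importance sampling: draw from the prior, weight by r(x),
# then resample in proportion to the weights to get approximate posterior draws.
x = sample_prior(100_000)
w = likelihood(x)
posterior_samples = x[rng.choice(len(x), size=10_000, p=w / w.sum())]

# Posterior mass concentrates on the mixture mode compatible with the constraint.
print(posterior_samples.mean())
```

This baseline degrades badly when the prior and the constraint disagree (most weights near zero), which is one motivation for amortized, asymptotically unbiased approaches of the kind the talk describes.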

Bio: Siddarth Venkatraman is a PhD student at Mila — Québec AI Institute, advised by Professor Glen Berseth. His research explores the intersection of probabilistic inference and reinforcement learning.