ANC Workshop - 11th February 2025

The NeuroAI session

NeuroAI is a new name for research at the interface of neuroscience and machine learning. In this session we'll hear about three projects in this area, covering representation learning, reinforcement learning, and linking neural computations to behaviour.

 

Henrique Reis Aguiar

Asynchronous Hebbian/anti-Hebbian networks

Lateral inhibition models coupled with Hebbian plasticity have been shown to learn factorised causal representations of input stimuli; for instance, oriented edges are learned from natural images. Currently, these models require the recurrent dynamics to settle into a stable state before weight changes can be applied, which is not only biologically implausible but also impractical for real-time learning systems. Here, we propose a new Hebbian learning rule implemented with biologically plausible mechanisms that have been observed experimentally. We find that this rule allows efficient, time-continuous learning of factorised representations, very similar to those learned by the classic non-continuous Hebbian/anti-Hebbian scheme. Furthermore, we show that this rule naturally prevents catastrophic forgetting when stimuli from different distributions are presented sequentially.
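To make the contrast concrete, here is a minimal numpy sketch of the classic settle-then-update Hebbian/anti-Hebbian scheme the abstract refers to: recurrent dynamics run to a stable state, and only then are the feedforward (Hebbian) and lateral (anti-Hebbian) weights updated. All sizes, the learning rate, and the Oja-style decay term are illustrative assumptions, not the proposed asynchronous rule.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 16, 4

W = rng.normal(scale=0.1, size=(n_out, n_in))  # feedforward weights (Hebbian)
M = np.eye(n_out)                              # lateral weights (anti-Hebbian)
lr = 0.01

def settle(x, n_steps=50):
    """Iterate the lateral-inhibition dynamics toward the fixed point y = W x - (M - I) y."""
    y = np.zeros(n_out)
    for _ in range(n_steps):
        y = W @ x - (M - np.eye(n_out)) @ y
    return y

for _ in range(1000):
    x = rng.normal(size=n_in)
    y = settle(x)                                       # wait for a stable state...
    W += lr * (np.outer(y, x) - (y**2)[:, None] * W)    # ...then Hebbian update with Oja-style decay
    M += lr * (np.outer(y, y) - M)                      # anti-Hebbian lateral update (decorrelation)
    np.fill_diagonal(M, 1.0)                            # keep self-inhibition fixed
```

The inner `settle` loop is exactly the step the proposed asynchronous rule aims to remove: weights here change only after the dynamics converge.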

 

Ada Duan

Predictive learning of sensory experiences forms representations that support reward-based task learning

Animals efficiently learn to navigate environments and obtain rewards that are essential for survival, despite the reward signals being sparse. This capacity arises partly because new learning does not happen from scratch; instead, it draws on rich internal representations shaped by previous life experience and evolution. Additionally, while learning to achieve a reward, animals can concurrently learn from all the sensory experiences that are not directly related to the reward signal. For example, learning to predict upcoming sensory experiences based on the animal's own actions allows representation learning to happen without additional supervision signals. We model experiments in which animals navigate an environment to find and remember rewards, using either reinforcement learning (RL) alone or a combination of predictive learning and RL. We will compare the representational differences that arise and how they affect task-learning performance.
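As a toy illustration of the predictive-learning component, the sketch below trains a small network to predict the next observation from the current observation and action on a ring of states; the hidden activity is the kind of internal representation that could then be handed to an RL learner. The environment, network sizes, and learning rate are illustrative assumptions, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_hidden = 8, 4   # ring world with one-hot observations, two actions

# one-hidden-layer predictor: (observation, action) -> next observation
W1 = rng.normal(scale=0.1, size=(n_hidden, n_states + 2))
W2 = rng.normal(scale=0.1, size=(n_states, n_hidden))
lr = 0.1

def one_hot(i, n):
    v = np.zeros(n)
    v[i] = 1.0
    return v

s = 0
for _ in range(5000):
    a = rng.integers(2)                           # 0 = step left, 1 = step right
    s_next = (s + (1 if a == 1 else -1)) % n_states
    x = np.concatenate([one_hot(s, n_states), one_hot(a, 2)])
    h = np.tanh(W1 @ x)                           # internal representation
    pred = W2 @ h
    err = pred - one_hot(s_next, n_states)        # self-supervised predictive error
    W2 -= lr * np.outer(err, h)                   # backprop through the two layers
    W1 -= lr * np.outer((W2.T @ err) * (1 - h**2), x)
    s = s_next

# h now encodes structure of the environment learned without any reward signal,
# and could serve as input features for a downstream RL learner
```

No reward appears anywhere in this loop: the representation is shaped entirely by action-conditioned sensory prediction, which is the point the abstract makes.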

 

Ian Hawes, CDBS Edinburgh

Reverse engineering neural networks to disentangle single neuron and population-level contributions to spatial computation and memory

How does initial learning modify neural networks at both the single-neuron and population level? And how are these computations repurposed to support generalisation to related tasks? We address these questions in recurrent neural networks (RNNs) trained by reinforcement learning on a spatial memory task, and then retrained on related tasks. 

Our previous work shows that these networks generate neural activity similar to that recorded from the medial entorhinal cortex of real mice performing the same task. Here, we first characterise the computations in the initial network, finding that they display the hallmarks of a population-level computation, yet with small neuronal subpopulations playing specific computational roles. When retraining on similar tasks, we find that learning is accelerated by leveraging the initially learned low-dimensional dynamics and single-neuron activity. Finally, by confining plasticity to specific populations of neurons during retraining, we identify which synaptic weight changes are required for generalisation and which facilitate it.
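Confining plasticity to a subpopulation during retraining can be sketched as a simple mask over the recurrent weight matrix. The network size, chosen subpopulation, and stand-in gradient below are illustrative assumptions, not the study's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10
W_rec = rng.normal(scale=0.1, size=(n, n))    # recurrent weights after initial training
W_before = W_rec.copy()

# allow plasticity only for synapses touching a chosen subpopulation of neurons
plastic = np.zeros(n, dtype=bool)
plastic[:3] = True                            # e.g. the first three neurons
mask = plastic[:, None] | plastic[None, :]    # True where pre- or post-neuron is plastic

grad = rng.normal(size=(n, n))                # stand-in for a gradient from retraining
W_rec -= 0.01 * np.where(mask, grad, 0.0)     # frozen synapses are left unchanged
```

Comparing performance across different choices of the plastic subpopulation is what lets one ask which weight changes are necessary for generalisation.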