Dr Edoardo Ponti wins €1.5 million ERC Starting Grant to pioneer adaptive, efficient foundation models

Dr Edoardo Ponti, Lecturer in Natural Language Processing at the University of Edinburgh, has been awarded a prestigious five-year €1.5 million European Research Council (ERC) Starting Grant for the project “Adaptive Tokenization and Memory in Foundation Models for Efficient and Long-Horizon AI” (AToM-FM). The project aims to re-engineer how large AI systems represent and remember information, making them far more efficient and sustainable.

“AToM-FM holds promise to usher in a new era of green AI that is not bottlenecked by its energy demand, by re-imagining the ‘atomic’ units that AI models use to represent and memorise information.”

Dr Edoardo Ponti

The challenge with foundation models

Foundation Models (FMs) – large, general-purpose AI systems trained on vast amounts of unlabelled data – have delivered huge advances across many tasks. However, their ever-increasing scale has serious downsides, including unsustainable energy demand and environmental harm. It also jeopardises data privacy, as using these FMs requires relying on third-party servers rather than edge devices (hardware capable of running AI algorithms locally, near the source of data generation).

These issues stem in part from a fixed-granularity approach to representation: current models break inputs into tokens and update memory in uniform ways, regardless of how simple or complex the task is. This one-size-fits-all design wastes huge amounts of computation.

The AToM-FM approach

AToM-FM will tackle this inefficiency at its source. As its key technical breakthrough, the project will develop adaptive tokenisation and memory mechanisms that adjust the “resolution” of representation to the complexity of the problem. In practice, the model will allocate computational effort where it is needed and conserve it where it is not – spending more on complex reasoning or fine-grained perception, and less on routine or redundant content.
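The announcement does not spell out the mechanism, but a minimal sketch can make the idea concrete. The toy Python below is a purely hypothetical illustration, not the project’s actual method: the names `merge_score` and `adaptive_pool` and the similarity threshold are all invented here. It merges runs of near-duplicate token embeddings into single coarse units, so predictable spans cost less downstream compute while informative spans keep full resolution.

```python
import numpy as np

def merge_score(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between neighbouring token embeddings;
    a stand-in for a learned 'redundancy' scorer (hypothetical)."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def adaptive_pool(embeddings: np.ndarray, threshold: float = 0.9) -> np.ndarray:
    """Greedily merge runs of near-duplicate token embeddings.

    Highly similar neighbours are averaged into one coarse unit;
    dissimilar ones keep full resolution, so sequence length (and
    hence downstream compute) shrinks where content is predictable.
    """
    pooled, group = [], [embeddings[0]]
    for vec in embeddings[1:]:
        if merge_score(group[-1], vec) > threshold:
            group.append(vec)                        # redundant: grow the group
        else:
            pooled.append(np.mean(group, axis=0))    # close the group
            group = [vec]
    pooled.append(np.mean(group, axis=0))
    return np.stack(pooled)

# Toy usage: a repetitive span collapses, distinct tokens survive.
rng = np.random.default_rng(0)
tokens = np.concatenate([np.tile(rng.normal(size=16), (5, 1)),  # 5 near-identical rows
                         rng.normal(size=(3, 16))])             # 3 distinct rows
print(adaptive_pool(tokens).shape[0], "units from", len(tokens), "tokens")
```

In a real system the merge decision would presumably be learned end-to-end rather than a fixed cosine-similarity threshold, but the sketch shows how resolution can adapt to content.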

The project’s name is a deliberate wordplay on reimagining the “atomic units” of model representations (tokens and memory items). 

Transformative impact

By making tokenisation and memory adaptive, AToM-FM will reduce wasted computation. Models will consume less electricity and produce fewer data‑centre emissions, helping to make AI more sustainable. Just as importantly, efficiency gains make it possible to run models directly on devices you control – phones, laptops, and local servers. This keeps data private, ensures faster responses, and enables systems to function even with poor connectivity. 

With reduced memory demands, models can also handle and generate much more information. This paves the way for permanent memory – essential for lifelong learning – and for longer reasoning chains that strengthen capabilities in planning and mathematical problem solving.
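As a rough illustration of how memory demands might be reduced, the hypothetical sketch below (the function `compress_kv_cache` and its eviction rule are invented here, not AToM-FM’s published design) compresses a transformer-style key-value cache by evicting the entries that have received the least attention, keeping long-horizon generation within a fixed memory budget.

```python
import numpy as np

def compress_kv_cache(keys, values, attn_weights, budget):
    """Keep only the `budget` cache entries with the highest cumulative
    attention mass: a simple eviction heuristic, purely illustrative.

    keys, values : (seq_len, dim) arrays of cached keys / values
    attn_weights : (num_queries, seq_len) attention from recent steps
    budget       : maximum number of entries to retain
    """
    importance = attn_weights.sum(axis=0)             # attention each entry received
    keep = np.sort(np.argsort(importance)[-budget:])  # top-k, in original order
    return keys[keep], values[keep]

# Toy usage: shrink a 1024-entry cache to a 256-entry budget.
rng = np.random.default_rng(0)
seq_len, dim, n_queries = 1024, 64, 32
keys = rng.normal(size=(seq_len, dim))
values = rng.normal(size=(seq_len, dim))
attn = rng.dirichlet(np.ones(seq_len), size=n_queries)  # rows sum to 1, like softmax
small_keys, small_values = compress_kv_cache(keys, values, attn, budget=256)
print(small_keys.shape)  # (256, 64)
```

Real memory-compression methods would typically learn what to keep or merge rather than apply a fixed heuristic, but the budgeted cache captures why compressing memory enables much longer contexts.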

AToM-FM is also designed to support modality-agnostic and fine-grained inputs. This means it can integrate low-resource data types, such as molecules, astronomical images, or sensor readings, and ground AI systems more effectively in real-world environments. 

Finally, rather than requiring entirely new systems, this project emphasises retrofitting and repurposing existing models into more efficient adaptive architectures, accelerating adoption and impact. 
