Browse the cohort 2 student profiles here (in alphabetical order of first name).

Aradhika Bagchi
Research: Privacy-enhancing technologies (PETs) and their social impacts
Supervisor(s): Tariq Elahi, Tara Capel

My PhD work looks at privacy-enhancing technologies (PETs) and their broader social impacts, particularly in relation to end-to-end encrypted communications, AI assistants, and surveillance systems. I believe that achieving truly equitable data privacy solutions requires developing frameworks that are context-sensitive and capable of addressing systemic inequities. I completed my undergraduate BSc (Hons) degree in Computer Science at Cardiff University and went on to complete a master’s (MS) at Columbia University, where my research focused on applying differential privacy to sensitive datasets, investigating how we can design privacy frameworks that offer stronger guarantees, particularly for marginalized communities.

Arran Carter
Research: Machine learning and statistical models for time series
Supervisor(s): Victor Elvira, Nikolay Malkin

I have previously worked on high-dimensional Bayesian inference problems in machine learning, in particular on improving methods for more efficient sampling and inference in Bayesian neural networks. I studied for my undergraduate degree in the School of Mathematics here at the University of Edinburgh. During my degree I was especially interested in Bayesian statistics as well as probabilistic modelling and machine learning.

Benjamin Redhead
Research: Adaptive foundation models for time-series forecasting under distribution shift
Supervisor(s): Amos Storkey, Victor Elvira
My Website

My research interest is in foundation models for time-series forecasting, specifically with regard to adaptability to domain shift, edge forecasting, and data privacy. I have a particular interest in leveraging continual and federated learning for urban forecasting systems. While my research is primarily grounded in time series, the methods I explore aim to inform broader seq2seq foundation model research. I pursued my Master of Science in Computer Software and Theory at the School of Computer Science of Peking University in China, under the supervision of Prof. Zhi Jin. I worked primarily on a heterogeneous mixture-of-experts approach to robust time-series forecasting, which formed the basis of my master’s thesis and which I presented as a guest speaker at Microsoft Research Asia. My undergraduate degree is in Mathematics, Operational Research, Statistics, and Economics (MORSE) from Lancaster University. Outside of academics I am interested in Eastern and Western philosophy, Mandarin, and tea culture.

Karen Zheng
Research: Reliable and privacy-preserving machine learning systems
Supervisor(s): Jingjie Li, Luo Mai

I’m keen to collaborate on projects that make machine learning safer and more accountable at scale. I recently graduated with a BSc (Hons) in Computer Science from the University of Edinburgh. My interests are in building reliable, privacy-preserving machine learning systems and pipelines, from data collection and evaluation through to deployment. I’ve worked across academia and industry in software engineering and information security, and I enjoy turning research ideas into practical ML systems.

Lakshmanan Lakshmanan
Research: Optimising and accelerating LM inference in the domain of systems
Supervisor(s): Boris Grot, Luo Mai

My core goal is improving performance for specific workloads.
I did my undergraduate and master’s degrees in electronics at the International Institute of Information Technology, Hyderabad, India. Most of my research work then was in embedded systems for a range of applications, which introduced me to computer science. During an internship I worked on porting a Cortex-M4 processor in a proprietary Texas Instruments SBC to the Zephyr RTOS, and I later worked extensively on performance characterisation for serverless applications as a visiting research student at the University of Edinburgh. For my PhD, I'd like to improve small LM inference on mobile devices for better efficiency and hardware utilisation, with a deliberate emphasis on making the optimisations general and scalable, allowing for better performance at the datacenter scale as well.

Tairan Xu
Research: Efficient AI systems
Supervisor(s): Luo Mai

By exploiting sparsity and optimising system-level designs, I aim to unlock the potential of large-scale models while maintaining practical deployment efficiency without compromising model quality. My background spans HPC, machine learning, and financial engineering. My research focuses on improving AI system efficiency across multiple scales, from kernel-level optimisations to large-scale distributed clusters. Currently, I am working on accelerating Mixture-of-Experts (MoE) inference and investigating both inference and training of sparse models on cutting-edge hardware platforms.

Reuben Carolan
Research: Compilers and programming language development
Supervisor(s): Jackson Woodruff

I've recently completed my BA in Computer Science at the University of Cambridge. For my dissertation I built my own compiler that translated functional programs into Interaction Nets and evaluated them using multiple threads. My PhD research focuses on compilers and how programs can be made to run efficiently on a variety of architectures.

Yangshen Deng
Research: Scalable machine learning systems and data infrastructures
Supervisor(s): Luo Mai

I am passionate about building systems. I hold a bachelor's degree from Beijing University of Science and Technology and a master's from Southern University of Science and Technology. My current research is focused on the intersection of Machine Learning and Systems (MLSys), with interests spanning databases and AI infrastructure. Ultimately, my goal is to develop robust, scalable systems for AI with elegant abstractions, interfaces, and principles, making it easier for researchers and engineers to deploy the next generation of artificial intelligence.

Yibo Ma
Research: Network digital twinning and energy efficiency in networked systems
Supervisor(s): Mahesh Marina

My research focuses on Network Digital Twinning and energy efficiency in networked systems, utilizing data-driven approaches to understand and optimize these systems. I work with real data and simulations to reduce energy use and carbon emissions. I’m particularly interested in how Network Digital Twin technology can make large-scale systems, such as mobile networks and data centers, more efficient and sustainable.

Ziqi Zhou
Research: Computer vision, multimodal learning, video understanding
Supervisor(s): Laura Sevilla
My Website

I completed my bachelor’s degree in Computer Science and Technology at the University of Chinese Academy of Sciences, followed by a master’s degree at the Institute of Automation, Chinese Academy of Sciences.
I worked on projects ranging from GCN-based mesh denoising to multimodal-driven and controllable talking face generation. I also gained industry experience through an internship at JD.com, where I worked on visual feature learning for advertisement recommendation systems. These experiences have shaped my broader research interests in computer vision and multimodal learning, particularly at the intersection of understanding and generation with large-scale models. Going forward, I am eager to explore how these models can capture higher-level concepts such as aesthetics, controllability, and interpretability, and to apply them in creative and impactful ways.

Zoey Shepherd
Research: PL, particularly program synthesis of specialised languages
Supervisor(s): Elizabeth Polgreen, Ohad Kammar

Aside from research, I also enjoy teaching, and I'm learning about quantum computing for fun. Working on another bunch of letters to put with BSc and MPhil. Actually likes working with lambda calculi. She/her 🏳️‍⚧️

I studied at Durham for my undergraduate degree and at Cambridge for my master's, both in computer science, and developed an interest in the theory of computation and PL theory, which led to my current research interests. Currently, I'm working on program synthesis techniques that restrict the syntactic variation allowed in a language in order to enumerate semantically distinct programs more quickly.

This article was published on 2025-10-24