How does data structure affect learning dynamics? Can we design learning paradigms that take advantage of such structure? Can prior knowledge help in learning? How? My research lies at the intersection of theoretical machine learning and neuroscience, and gravitates around these questions. In short, I like to construct simple models that capture emergent phenomena in learning by identifying a few key parameters. Using methods from statistical physics, I analyze these models and obtain exact equations that yield quantitative insight into the underlying process.
I earned my PhD from Imperial College London under the supervision of Claudia Clopath. During my doctoral studies and subsequent postdoctoral position, I focused on developing mechanistic models of synaptic plasticity. Seeking to further collaborate with experimental researchers, I joined Athena Akrami’s lab at the SWC where we developed a cross-species research project investigating decision-making in rodents, humans, and computational models. Recently, I joined the Saxe lab to delve deeper into the intersection between experimental and theoretical neuroscience, with a particular interest in the mechanisms and fundamentals underlying learning and memory formation.
I’m a student on the 1+3 doctoral programme in neuroscience, co-supervised by Andrew Saxe and Adam Packer. My project involves devising and testing experimental predictions, derived from different theories, about network dynamics across the visual cortical hierarchy: in response to activity perturbations, during sensory experience, and throughout the course of perceptual learning. Previously, I studied neuroscience at the University of Bristol and spent time in industry at Roche, where I studied the development of excitatory-inhibitory balance in human stem-cell-derived neurons.
I am interested in machine learning paradigms involving multiple tasks, such as continual learning (tasks in sequence) and meta-learning (task distributions). I want to develop a better understanding of phenomena in deep neural networks in these problem settings. I am co-supervised by Claudia Clopath at Imperial College London. Before starting my PhD, I completed my Master’s in computer science with Andrew Saxe at Oxford.
Before joining the lab as a DPhil student, I studied Cognitive Science at the University of Osnabrück and Computational Neuroscience at the Bernstein Center for Computational Neuroscience in Berlin. My research aims to develop mathematical tools and use simulation studies to understand the learning dynamics of gradient-based algorithms, and how these dynamics apply to learning in biological neural networks.
Can learning in the brain approximate end-to-end learning like gradient descent? How do brain-wide changes accumulate into better task performance? I am tackling these and other questions by investigating the role of midbrain dopamine neurons (and their projections to the striatum) in mice as they learn to perform a visual decision task, from day one to expert performance.
My aim is to build a theoretical framework linking the observed behaviour with the measured dopamine release and neural recordings, and then to explore extensions of the model that can account for continual learning (i.e. learning a second task after training on the first) and the cognitive control of learning (i.e. deciding whether, and how much of our learning abilities, to invest in learning a new task).
I am a second-year student at Gatsby co-supervised by Andrew Saxe and Felix Hill. I’m interested in how humans and AIs can form abstract concepts and transfer them across tasks (including across different state spaces). Currently, I study this through the lens of Sudoku-esque Nikoli puzzles, which offer a great test bed where concept learning is easily quantifiable and high-level concepts are known to apply across puzzles. I’m also generally interested in large language models and their emergent abilities.
In my free time, I love to play strategy board games, listen to sci-fi/fantasy audiobooks, and vibe to (mostly Ed Sheeran) music.
I am particularly interested in studying the computational theories underlying learning and memory consolidation in neuronal networks. I am interested in working toward answering questions such as: How does the brain create, store, generalize, and update memories without interfering with previously stored ones? What is the function of episodic memory? I look forward to making advances on these questions, working at the intersection of theoretical neuroscience and machine learning.
Whether a novel task is worth learning, how effort may impact learning, and how much effort to allocate towards learning are important questions agents face. To answer them, I am currently working on models of cognitive control for learning systems, in particular on how control signals might shape the learning dynamics of linear networks. I am also collaborating with Clementine Domine (also on this page) to build software that lets people test their models of the hippocampus and entorhinal cortex in simulated environments resembling experimental settings, for direct comparison with neural data recorded in each experiment. I have a broad set of interests, including neuroscience, machine learning, biology, and philosophy of science. Feel free to reach out if you want to chat!
I am a student on the GUDTP 1+3 in Experimental Psychology at the University of Oxford, co-advised by Chris Summerfield and Andrew Saxe.
My interests range from Cognitive Neuroscience and Psychology to Computational Neuroscience and Machine Learning. My MSc work investigates semantic learning; specifically, I will examine whether behavioural and representational changes during the learning of semantic knowledge are analogous to those observed in deep linear networks. To this end, we employ behavioural experiments, neuroimaging, and computational modelling.