Associate Professor, Brain and Cognitive Sciences
Associate Member, McGovern Institute
Ila Fiete is an Associate Professor in the Department of Brain and Cognitive Sciences and an Associate Member of the McGovern Institute at MIT. She obtained her undergraduate degrees in Physics and Mathematics at the University of Michigan and her M.A. and Ph.D. in Physics at Harvard, under the guidance of Sebastian Seung at MIT. She carried out postdoctoral work at the Kavli Institute for Theoretical Physics in Santa Barbara and at Caltech, where she was a Broad Fellow. She was subsequently on the faculty of the University of Texas at Austin, in the Center for Learning and Memory. Ila Fiete is an HHMI Faculty Scholar. She has been a McKnight Scholar, an ONR Young Investigator, an Alfred P. Sloan Foundation Fellow, and a Searle Scholar.
The Fiete Lab
Our group seeks to understand why the brain contains particular codes, how the architecture and dynamics of neural circuits shape such codes, and how coding states evolve to perform computations that unfold over time. We are specifically interested in questions of learning, memory, integration, inference, and cognitive representations in the brain. Our tools are numerical and theoretical, and our approach includes working closely with collaborators on specific experimental systems.
Coding: In principle, the brain could encode information about a variable in any of a myriad of ways. The choice of coding scheme sheds light on the brain's computational priorities in representing that variable. For instance, codes can differ in capacity, ease of readout by downstream areas, or noise tolerance. Understanding a neural code means not only learning what or how much is encoded, but also learning the tradeoffs of the coding scheme, to see “why” it was selected.
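As a toy illustration of such tradeoffs (a generic sketch, not one of our models), the code below compares the noise tolerance of a distributed population code for a circular variable against a single-neuron “scalar” code. All specifics here are arbitrary assumptions: 64 neurons, cosine tuning, Gaussian noise, and a population-vector readout.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 64 neurons with evenly spaced preferred angles.
n_neurons = 64
pref = np.linspace(0, 2 * np.pi, n_neurons, endpoint=False)

def population_encode(theta):
    """Distributed code: each neuron fires according to its cosine tuning curve."""
    return np.cos(theta - pref)

def population_decode(rates):
    """Population-vector readout: angle of the summed tuning vectors."""
    return np.arctan2(rates @ np.sin(pref), rates @ np.cos(pref)) % (2 * np.pi)

def circ_error(a, b):
    """Absolute angular difference on the circle."""
    return np.abs(np.angle(np.exp(1j * (a - b))))

theta_true = 1.3
trials = 200
pop_err, scalar_err = [], []
for _ in range(trials):
    noise = rng.normal(0, 0.3, size=n_neurons)
    # Distributed code: independent noise largely averages out in the readout.
    pop_err.append(circ_error(
        population_decode(population_encode(theta_true) + noise), theta_true))
    # "Scalar" code: one neuron's rate directly equals the angle, so the
    # same per-neuron noise corrupts the estimate one-for-one.
    scalar_err.append(circ_error((theta_true + noise[0]) % (2 * np.pi), theta_true))

print(np.mean(pop_err), np.mean(scalar_err))
```

The distributed code buys noise tolerance at the cost of many neurons per variable; the scalar code is compact but fragile, a simple instance of the kind of tradeoff described above.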
Plasticity, learning and memory: What kinds of network connectivity support robust integration, memory, and representation of the world? How do such structures emerge through development and plasticity? What are effective mechanisms for unsupervised and supervised learning in the brain? We study these questions through theory and simulation of neural circuits.
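One classic, minimal illustration of connectivity supporting memory, offered here as a textbook-style sketch rather than one of our models, is a Hopfield-style network: an unsupervised Hebbian rule builds the connectivity, and the resulting dynamics complete a corrupted cue back to a stored pattern. The sizes and corruption level below are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hebbian outer-product connectivity stores binary patterns as
# (approximate) fixed points of the network dynamics.
n = 100
patterns = rng.choice([-1, 1], size=(3, n))  # three random memories

W = sum(np.outer(p, p) for p in patterns) / n
np.fill_diagonal(W, 0.0)  # no self-connections

def recall(state, steps=20):
    """Iterate synchronous sign-threshold updates."""
    for _ in range(steps):
        state = np.sign(W @ state)
    return state

# Corrupt 10% of the first memory, then let the dynamics clean it up.
cue = patterns[0].copy()
flipped = rng.choice(n, size=10, replace=False)
cue[flipped] *= -1

overlap = np.mean(recall(cue) == patterns[0])
print(overlap)  # close to 1.0: the corrupted cue is restored
```

Here the Hebbian rule ties connectivity directly to memory content, a simple example of the structure-function questions posed above.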
Error correction: Representations in the brain are necessarily noisy and variable because of apparently stochastic dynamics in neurons and synapses. Avoiding the problems that arise from these processes, especially in memory systems where noise accumulates, requires aggressive error reduction and correction, yet our understanding of how the brain accomplishes this is primitive at best. We investigate representations and mechanisms for error control, bringing together coding and dynamical considerations.
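A minimal numerical sketch of why memory systems are the hard case: in a memory that integrates its own noisy state, per-step noise performs a random walk, so error grows like sqrt(T); averaging K redundant noisy copies (a crude redundancy code, used here purely for illustration) cuts the accumulated error by roughly sqrt(K). All numbers below are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

T, K, trials, sigma = 400, 16, 200, 0.1

# Each trial: K independent integrators, each accumulating T noise steps.
noise = rng.normal(0, sigma, size=(trials, K, T))
single = noise[:, 0, :].sum(axis=-1)         # one noisy integrator, T steps
redundant = noise.sum(axis=-1).mean(axis=1)  # average of K such integrators

print(np.std(single), np.std(redundant))  # ~ sigma*sqrt(T) vs. sigma*sqrt(T/K)
```

Redundant averaging only suppresses independent noise; correlated errors, and errors in the readout itself, demand the richer coding and dynamical mechanisms this research direction targets.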
Theoretically motivated analysis of neural data: We analyze neural data with a view toward discovering mechanism, specifically by testing the predictions of theoretical models. We close the theory-experiment loop through detailed comparisons of theory and data and through collaborative design of experiments that effectively discriminate between models.
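The flavor of such model discrimination can be sketched on synthetic data (a hypothetical example, not an analysis from the lab): compare the held-out log-likelihood of a tuned versus an untuned Poisson model of a neuron's spike counts. The ground-truth tuning curve, the crude least-squares fit, and all parameters are assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic data: spike counts from a cosine-tuned Poisson neuron.
theta = rng.uniform(0, 2 * np.pi, 400)
rate_true = np.exp(1.0 + 1.5 * np.cos(theta - 0.7))
counts = rng.poisson(rate_true)

train, test = slice(0, 300), slice(300, 400)

def poisson_ll(rate, k):
    # Poisson log-likelihood up to the log(k!) term, which is model-independent.
    return np.sum(k * np.log(rate) - rate)

# Model 1 (untuned): constant rate fit on training data.
flat_rate = counts[train].mean()
ll_flat = poisson_ll(np.full(100, flat_rate), counts[test])

# Model 2 (tuned): log-linear cosine model fit by least squares on
# log-counts -- a crude stand-in for a proper Poisson GLM fit.
X = np.column_stack([np.ones(300), np.cos(theta[train]), np.sin(theta[train])])
w, *_ = np.linalg.lstsq(X, np.log(counts[train] + 0.5), rcond=None)
Xt = np.column_stack([np.ones(100), np.cos(theta[test]), np.sin(theta[test])])
ll_tuned = poisson_ll(np.exp(Xt @ w), counts[test])

print(ll_tuned - ll_flat)  # a positive margin favors the tuned model
```

Evaluating on held-out data, rather than training fit, is what lets the comparison discriminate between models instead of rewarding the more flexible one.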