About Me

I am a computational neuroscientist studying how biological and artificial systems efficiently learn models of the world and use them flexibly to make optimal decisions. My work focuses on reinforcement learning, representation learning, and biologically plausible learning rules, with the goal of identifying computational principles that support rapid generalization and adaptive behavior. I develop theory-driven agents, grounded in experimental data, to explain neural computations and the failure modes of learning, and to translate these insights into better artificial systems and into tools that support learning and education.


Experience

  • 2026–Present — Postdoctoral Fellow, Max Planck Institute for Biological Cybernetics
  • 2023–2025 — Postdoctoral Fellow, SEAS, Harvard University
  • 2022–2023 — Research Scientist, Centre for Frontier AI Research (CFAR), A*STAR
  • 2017–2018 — Research Engineer, A*STAR Artificial Intelligence Initiative

Education

  • Ph.D. Computational Neuroscience, National University of Singapore (2022)
  • B.Sc. Life Sciences, USP & SPS, National University of Singapore (2017)

Honors & Awards


News

  • 03/2026 — Poster on scaling feature learning in a hippocampal place-field model at COSYNE 2026
  • 09/2025 — Talk on meta-reinforcement learning agents for suboptimal behavior at Rick Adams Lab, UCL
  • 08/2025 — Poster on suboptimal agents from meta-RL at CCN 2025

Selected Publications

Predictive coding of reward in the hippocampus
Yaghoubi, M., Kumar, M. G., Nieto-Posadas, A., et al.
Nature, 2025

A Model of Place Field Reorganization During Reward Maximization
Kumar, M. G., Bordelon, B., Zavatone-Veth, J., Pehlevan, C.
ICML, 2025

DetermiNet: A Large-Scale Diagnostic Dataset for Complex Visually-Grounded Referencing using Determiners
Lee, C., Kumar, M. G., Tan, C.
ICCV, 2023

A nonlinear hidden layer enables actor–critic agents to learn multiple paired association navigation
Kumar, M. G., et al.
Cerebral Cortex, 2022