Neuroscience-AI 2025 Symposium

Tuesday, 29 April 2025

9.00AM to 4.00PM

Centre for Life Sciences Auditorium

National University of Singapore

28 Medical Dr, Singapore 117456

About the Symposium

The Neuroscience-AI 2025 Symposium brings together leading researchers at the intersection of neuroscience and artificial intelligence to explore cutting-edge developments in both fields. This event, organized by the Singapore Chapter of the Society for Neuroscience (SfN.SG), aims to foster interdisciplinary collaboration and showcase the latest advances in brain-inspired computing, neural decoding, and AI applications in neuroscience.

The symposium will feature keynote talks, invited presentations, flash talks from selected posters, and networking opportunities for researchers across academia and industry.


Venue Information

The symposium will be held at the Centre for Life Sciences Auditorium, located in the Life Sciences Institute at the National University of Singapore.

Address: 28 Medical Dr, Singapore 117456


Scientific Programme

Time Details
08.30 - 09.00 Registration
09.00 - 09.10 Opening address: Caroline Wee, Serenella Tolomeo, M Ganesh Kumar
Session 1: How AI transforms Neuroscience (Chair: M Ganesh Kumar)
09.10 - 09.35 Mengmi Zhang (NTU)
What AI cannot do but humans can: closing research gaps with neuroscience-inspired approaches
09.35 - 10.00 Gerald Pao (Okinawa Institute for Science and Technology)
Manifolds of brain activity that predict behavior without latent variables
10.00 - 10.25 Yisong Yue (Caltech)
Foundational Models for Neuroscience
10.25 - 10.55 Coffee Break
10.55 - 11.15 Flash Talks selected from posters (Chair: Stuart Derbyshire)
(4 x 5 min)
William Tong, Harvard
Maytus Piriyajitakonkij, A*STAR
Rohini Elora Das / Rajarshi Das, New York University
Mingye Wang, Columbia University
Session 2: How Neuroscience inspires AI (Chair: Mengmi Zhang)
11.15 - 11.40 Cheston Tan (A*STAR)
Insights from Neuroscience and Psychology for AI in an Era of Large Generative Models
11.40 - 12.05 Seng-Beng Ho (AI Institute Global)
Rethink Neuroscience and AI
12.05 - 12.30 Dmitry Krotov (IBM-MIT)
Dense Associative Memory and its potential role in brain computation
12.30 - 12.35 Sponsor Talk: CAIRA
From Insight to Impact: Ensuring Clarity in the Era of AI-Driven Discovery
12.35 - 13.45 Lunch and Poster Session
Session 3: How neuroscience meets AI: collaborative paths to translation (Chair: Serenella Tolomeo)
13.45 - 14.10 Iris Groen (University of Amsterdam)
Alignment of visual representations in AI and human brains: beyond object recognition
14.10 - 14.35 Janet Wiles (University of Queensland)
Human-centred AI in the Florence project: Make my day go better
14.35 - 14.55 Flash Talks selected from posters (Chair: Stuart Derbyshire)
(4 x 5 min)
Camilo Libedinsky / Yichen Qian, NUS
Shreya Kapoor, Friedrich-Alexander University
Samuel Lippl, Columbia University
John Vastola, Harvard University
14.55 - 15.00 Closing Remarks and Announcements
15.00 - 16.00 Poster session (continued) and refreshments
SfN.SG Annual General Meeting (all are welcome)
16.00 - 16.05 Welcome Remarks: Caroline Wee
16.05 - 16.35 Keynote Talk: Camilo Libedinsky
When we have robots that think and behave like humans, will they be conscious?
16.35 - 17.00 AGM Presentation: Caroline Wee and Chris Asplund
17.00 - 18.00 Networking Reception

Speakers

Mengmi Zhang

Nanyang Technological University
What AI cannot do but humans can: closing research gaps with neuroscience-inspired approaches

The fields of neuroscience and AI have a long and intertwined history. From the study of simple and complex cells in visual areas of the brain to the success of deep neural networks in many real-world applications, experimental and theoretical neuroscience has contributed significantly to designing smarter machines. In turn, AI models help us better understand brain computations that underlie biological intelligence. In my talk, I will present several efforts to decipher brain function by building computational models and quantifying model behaviors with human benchmarks in several cognitive tasks, such as visual search, context reasoning, continual learning, working memory, and generalization. I will also discuss how these models not only help us understand the neural mechanisms driving human cognition but also inspire the development of more robust AI systems that replicate human-like decision-making, attention, reasoning, learning, and memory. Through these interdisciplinary efforts, we can pave the way for more intelligent, robust, and adaptable AI systems that mirror the complexities of biological intelligence.

Dr. Mengmi Zhang is a tenure-track assistant professor at the College of Computing and Data Science at Nanyang Technological University, Singapore. She also holds a joint appointment as a principal scientist and principal investigator at the Agency for Science, Technology and Research (A*STAR), Singapore. Prior to this, Dr. Zhang was a postdoc with Gabriel Kreiman at the Harvard-MIT Center for Brains, Minds, and Machines (2019-2021). Dr. Zhang is an awardee of the National Research Foundation Fellowship and a recipient of the Singapore 100 Women in Tech accolade. Her research background is at the intersection of artificial intelligence and computational neuroscience, with contributions to understanding visual attention, contextual reasoning, working memory, generalization, and continual learning. Dr. Zhang and her team are investigating how brain computations inspire new paths in AI and how AI can help elucidate brain computations.

Gerald Pao

Okinawa Institute for Science and Technology
Manifolds of brain activity that predict behavior without latent variables

In the last decade or so, neuroscience has gone from predominantly single-neuron recordings to large-scale recordings of up to a million neurons. This transformation in data collection allows us to ask more broadly what the nature of the population code of large numbers of neurons is and how it relates to behavior. Linear methods have shown poor predictive power, and common dimensionality reduction approaches, most notably principal component analysis (PCA), fail to provide an understandable representation of the population code. Moreover, to our knowledge, existing dimensionality reduction methods produce latent variables that are not experimentally testable, as the latent components have no direct correspondence to either brain areas or neurons. Here we develop novel dimensionality reduction algorithms, originating from a dynamical systems framework, the generalized Takens theorem, and its application to causal inference, the convergent cross-mapping (CCM) algorithm, to produce predictive, data-driven geometric models based on low-dimensional manifolds that generically map sensory input to brain activity to behavior without latent variables. Every variable corresponds to a single identifiable neuron, a population of neurons, or a brain area. The dimensionally reduced representation is a manifold mapping that can predict the animal's future behavior from neural activity and is sensitive to sensory input, where each orthogonal axis in the ambient space of the manifold corresponds to an observable neuron, population of neurons, or brain area. As such, it allows for experimental verification, as it produces falsifiable predictions of the contributions of candidate brain components to behavior.
In addition, the direct 1:1 mapping of observables to orthogonal axes provides explainability for the relative contributions of each population of neurons in a state-dependent manner. We name the first of these methods Causal Compression, as it is grounded in CCM causal inference as the key element for identifying orthogonal bases for attractor reconstruction. We show examples from Drosophila whole-brain light-field microscopy recordings, rat tetrode recordings from three brain areas, and whole-brain human fMRI, demonstrating that one can map brain activity to motor output using a manifold of surprisingly low dimensionality to predict future behaviors of flies, rats and humans. Manifolds discovered by Causal Compression can also be used to simulate large systems such as the brain when combined into a network of manifolds: our method, Generative Manifold Networks (GMN), generates a simulation of realistic whole-brain activity of Drosophila as well as the motor output of the animal. We show that it scales to single-neuron resolution for over 100,000 neurons of a larval zebrafish.

Dr. Gerald Pao originally worked on the molecular evolution of proteins from a structural and computational perspective as an undergraduate at the University of California, San Diego, working with Milton H. Saier, Joseph Kraut, Flossie Wong-Staal, Russell F. Doolittle and Tony Hunter. He then worked mainly as an experimentalist, studying the epigenetics of cancer and stem cells and the development of viral vectors for basic science at the Salk Institute during his PhD and postdoc with Inder M. Verma. Work on stem cells led him to study limb regeneration in the axolotl salamander (Ambystoma mexicanum) at the Salk Institute and UC Irvine with David Gardiner and Tony Hunter. This was followed by a change of field through postdoctoral training in applied mathematics and data science, specializing in nonlinear dynamics, at the Scripps Institution of Oceanography (SIO) in the Climate, Atmospheric Sciences and Physical Oceanography (CASPO) department with George Sugihara. After becoming a staff scientist at the Salk Institute, he continued work on nonlinear dynamics, mainly on causal inference in systems neuroscience and systems biology. He was also a visiting scientist at the National Institute of Advanced Industrial Science and Technology (AIST) of the Japanese Ministry of Economy, Trade and Industry (METI), in the Information Technology Research Institute (ITRI) and the Artificial Intelligence Research Center (AIRC), making the computational methods of empirical dynamic modeling suitable for Big Data and high-performance computing (HPC) using Japan’s second-fastest supercomputer, ABCI. In addition to this work, Dr. Pao identified, cloned and developed cephalopod reflectin proteins from various squid species for the manipulation of the optical refractive index in vivo in mammals. Before joining OIST he had a two-year stint in industry as a research director for high-throughput screening data science and gene therapy at Vertex Pharmaceuticals, a Fortune 500 company.

Yisong Yue

California Institute of Technology
Foundational Models for Neuroscience

Foundation models offer the potential to transform many aspects of our society, and are already impacting areas such as software programming and generative art. In this talk, I will describe progress by my group and our collaborators in developing foundation-model enabled frameworks for scientific domains, including in neuroscience. Some tasks require developing so-called "LLM Agents" that can automatically reason over complex workflows, such as data analysis or hypothesis search. Other tasks require developing novel foundation or foundation-like models, such as for processing neural signals or behavioral videos.

Dr. Yisong Yue is a professor of Computing and Mathematical Sciences at the California Institute of Technology. His research interests lie primarily in machine learning, and span the entire theory-to-application spectrum from foundational advances all the way to deployment in real systems. He works closely with domain experts to understand the frontier challenges in applied machine learning, distill those challenges into mathematically precise formulations, and develop novel methods to tackle them.

Cheston Tan

A*STAR
Insights from Neuroscience and Psychology for AI in an Era of Large Generative Models

Despite recent rapid advances in large generative models (such as large language models, image/video diffusion models, etc.), these models still fall short in terms of being reliable, generalizable and efficient. In that regard, neuroscience and psychology can be natural sources of insights, as "human-level intelligence" is still widely considered to be a gold standard for a number of aspects. In this talk, I will present some early results demonstrating the usefulness of approaches to AI inspired by neuroscience and psychology, along with ideas for future work.

Dr. Cheston Tan graduated with a Bachelor of Science (Highest Honours) from the Department of Electrical Engineering and Computer Science of UC Berkeley, supported by a PSC Overseas Merit Scholarship. He obtained his Doctor of Philosophy from the Department of Brain and Cognitive Sciences at MIT, supported by the A*STAR National Science Scholarship (PhD). He has published many papers at top international conferences and journals in AI, vision and neuroscience, such as CVPR, ECCV, NeurIPS, IJCAI, PAMI, IJCNN, T-Cyb, T-HMS, PLoS One and Vision Research. He is an Associate Editor for IEEE Transactions on Cognitive and Developmental Systems, and has served as an Area Chair for ICLR and Senior Programme Committee member for AAAI and IJCAI. He has received a number of awards, including the A*STAR Talent Award and NeurIPS Travel Award. At MIT, he was awarded the Mark Gorenberg Graduate Student Fellowship and the Walle Nauta Award for Excellence in Graduate Teaching. At UC Berkeley, he won the Chancellor Tien Award for Engineering and Computer Science.

Seng-Beng Ho

AI Institute Global
Rethink Neuroscience and AI

Last century's AI was inspired by last century's neuroscience and psychology; this century's AI should be based on 21st-century neuroscience and psychology. This talk reviews a potentially paradigm-changing view of “21st century” neuroscience and psychology, presented by a group of University of Virginia scientists on the basis of concrete new scientific data. Novel phenomena such as near-death experiences (NDEs), published in established scientific journals such as The Lancet, present major challenges to the current view of the relationship between the mind and the brain. In particular, they challenge the computational paradigm in AI, which attempts to replicate the functioning of the brain, and the computational paradigm in cognitive science more generally, which attempts to explain how mental phenomena are generated by the brain’s operations. In this talk, we will use the work of David Marr, who was both a neuroscientist and a computer vision pioneer, to illustrate the fundamental issues arising from this new view of the relationship between the mind and the brain.

Dr. Ho obtained his Ph.D. in Cognitive Science (AI, Neuroscience, Psychology, and Linguistics) and M.Sc. in Computer Science from the University of Wisconsin-Madison, U.S.A. He has a B.E. in Electronic Engineering from the University of Western Australia. He has published a number of AI-related papers in international journals and conferences over the years, and is the author of a monograph published in June 2016 by Springer International entitled “Principles of Noology: Toward a Theory and Science of Intelligence”. In the book, he presents a principled and fundamental theoretical framework critical for building truly general AI systems. For 10 years, he was Senior and Principal Scientist at the Institute of High Performance Computing, A*STAR. For 11 years, he was President of E-Book Systems Pte Ltd, a company he founded that developed and marketed a novel 3D page-flipping interface for electronic books, with offices in Silicon Valley, Beijing, Tokyo, Germany, and Singapore. He holds 36 U.S. and worldwide patents related to e-book technology. For 8 years, he lectured at the National University of Singapore on AI and Cognitive Science.

Dmitry Krotov

IBM-MIT
Dense Associative Memory and its potential role in brain computation

Dense Associative Memories (Dense AMs) are energy-based networks deeply rooted in the ideas of statistical physics and Ising-like spin models. In contrast to conventional Hopfield Networks, which were popular in the 1980s, Dense AMs have a very large memory storage capacity, possibly exponential in the size of the network. This makes them appealing tools for many problems in AI and neurobiology. In this talk I will describe two theories of how Dense AMs might be built in biological “hardware”. According to the first theory, Dense AMs arise as effective theories after integrating out a large number of neuronal degrees of freedom. According to the second, astrocytes, a particular type of glial cell, serve as core computational units enabling large memory storage capacity. This second theory challenges a common view in the neuroscience community that astrocytes are merely passive housekeeping support structures in the brain; instead, it suggests that astrocytes are actively involved in brain computation and in memory storage and retrieval.

Dr. Dmitry Krotov is a physicist working on neural networks and machine learning. He is a member of the research staff at the MIT-IBM Watson AI Lab and IBM Research in Cambridge, MA. Prior to this, he was a member of the Institute for Advanced Study in Princeton. He received a PhD in Physics from Princeton University in 2014. His research aims to integrate computational ideas from neuroscience and physics into modern AI systems. His recent work focuses on high-memory storage capacity networks known as Dense Associative Memories.

Iris Groen

University of Amsterdam
Alignment of visual representations in AI and human brains: beyond object recognition

How does the brain represent the world around us? In recent years, the unprecedented ability of deep neural networks (DNNs) to predict neural responses in the human visual cortex has generated a lot of excitement about these models’ potential to capture human visual perception. However, studies demonstrating representational alignment of visual DNNs with humans typically use brain responses to static, isolated objects, while real-life visual perception requires processing of complex, dynamic scenes. In this talk, I will discuss work that aims to move the field beyond object recognition by focusing on scene perception, video perception, and data-driven alignment via brain-guided image generation.

Dr. Iris Groen is an Assistant Professor (tenured) and MacGillavry Fellow at the Video & Image Sense Lab at the Informatics Institute at the University of Amsterdam (UvA). She is also affiliated with the Department of Brain and Cognition at the Psychology Research Institute at the UvA. Her team studies vision in the human brain using measurement techniques such as EEG, fMRI and ECoG, in combination with computational models, including deep neural networks. The goal of her research is to find out how the human brain perceives and understands real-world images and videos. She also explores how we can use bio-inspired computations from human perception to improve AI.

Janet Wiles

University of Queensland
Human-centred AI in the Florence project: Make my day go better

The Florence project is designing communication technology to assist people living with dementia. Florence comprises a personalized knowledge bank, which supports an ecosystem of communication devices designed to enhance quality of life at home. What makes Florence different is the participatory design process with a team of lived experience experts – people living with dementia – which is producing an integrated system that builds on people’s strengths, with hardware, software, tutoring and data custodianship built into every component. Effective AI in this human-centred technology is an enabler of people’s everyday goals, serving as a calm computing element that contributes to quality of life.

Dr. Janet Wiles is a Professor in Human Centred Computing at the University of Queensland. Her multidisciplinary team co-designs language technologies to support people living with dementia and their carers, and social robots for applications in health, education, and neuroscience. She is currently developing a citizen science project that uses insights from neuroscience and language technologies to explore the electrical characteristics of symbiotic fungi in local ecosystems. She received her PhD in computer science from the University of Sydney and completed a postdoctoral fellowship in psychology. She has 30 years’ experience in research and teaching in machine learning, artificial intelligence, bio-inspired computation, complex systems, visualisation, language technologies and social robotics, leading teams that span engineering, humanities, social sciences and neuroscience. She currently teaches research methods to thesis and masters students, and is developing a new course in human-centred AI. Previous special-interest courses include a cross-disciplinary course, “Voyages in Language Technologies”, which introduced computing students to the diversity of Indigenous and non-Indigenous languages and to state-of-the-art tools for deep learning and other analysis techniques for working with language data.

Camilo Libedinsky

National University of Singapore
When we have robots that think and behave like humans, will they be conscious?

Dr. Camilo Libedinsky is an Associate Professor at the Department of Psychology of NUS, and a principal investigator at the N.1 Institute for Health (N.1). He received his B.Sc. from Universidad de Chile and his Ph.D. from Harvard University. His lab aims to understand the neural mechanisms underlying cognitive operations involved in intelligent behavior at the level of populations of neurons.


Poster Presentations

Poster # Author(s) Title
P1 William Tong Learning richness modulates equality reasoning in neural networks
P2 Yichen Qian Inferring Latent Behavioral Strategy from the Representational Geometry of Prefrontal Cortex Activity
P3 Serenella Tolomeo TBC
P4 Hu Mengjiao Empirical Dynamic Modeling for Accurate Prediction and Detection of State Transition in Physiological and Behavioral Time Series of Mice
P5 Ganesamoorthy Subramaniam Neural Correlates of Yogic Breathing
P6 Shreya Kapoor Computer Graphics from a Neuroscientist's perspective
P7 Dota Tianai Dong Understanding Multimodal Prediction in the Brain Using Multimodal Models
P8 Mozes Jacobs Traveling Waves Integrate Spatial Information Through Time
P9 Maytus Piriyajitakonkij Why we speak: Emergent Communication in Cooperative Foraging Agents
P10 Xiao Liu Reason from Context with Self-supervised Learning
P11 Rohini Elora Das Representational-Alignment in Theory-of-Mind Tasks Across Language Models/Agents
P12 Mingye Wang Brain-like slot representation for sequence working memory in recurrent neural networks
P13 Weicheng Tao Gazing at Rewards: Eye Movements as a Lens into Human and AI Decision-Making in Hybrid Visual Foraging
P14 Samuel Lippl The role of mixed selectivity and representation learning for compositional generalization
P15 Jiachuan Wang A Biologically Plausible Computational Model of Hippocampal Neurogenesis and Pattern Separation in Memory
P16 John Vastola A new perspective on dopamine: dopamine, cause-effect learning, and causal reinforcement learning
P17 Shuangpeng Han Deep Neural Networks Generalize to Biological Motion Perception

Organizing Committee

Caroline Wee
Principal Investigator, A*STAR
Ajay Mathuru
Associate Professor, Yale-NUS
Serenella Tolomeo
Senior Scientist, A*STAR
Sarah Luo
Principal Investigator, A*STAR
M Ganesh Kumar
Postdoctoral Fellow, Harvard
Valarie Tham
Research Officer, A*STAR