Our Research Mission
How does our brain make sense of the world?
We perceive the world by rapidly transforming streams of meaningless sensory signals into meaningful tokens, such as hearing the word ‘brain’ or seeing an apple. How can our brain do this so quickly, efficiently, and robustly? The key that may unlock this ability is prediction: the brain is constantly forming predictions of its input, and the mismatch between these predictions and incoming information is used to update them, in a virtuous cycle.
We aim to understand how the brain manages to make sense of the world so rapidly and robustly, based on a vast amount of noisy and ambiguous sensory inputs.
Our guiding hypothesis is that the brain constructs predictive internal models of its environment through a process of self-supervised learning, and compares the predictions from these models with incoming sensory inputs. This predictive processing strategy may enable efficient encoding of incoming signals, rapid and robust perceptual inference based on those signals, and continuous learning of the complex, hierarchical statistical regularities that exist in the world.
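The predict-compare-update cycle described above can be illustrated with a deliberately minimal sketch (hypothetical toy example, not a model used in our research): an internal estimate is repeatedly compared with noisy sensory samples, and the prediction error nudges the estimate toward the true underlying cause.

```python
import numpy as np

# Minimal illustrative sketch of the predictive cycle: an internal model
# predicts the next input, compares it with the actual noisy signal, and
# shifts its prediction a small step along the resulting error.

rng = np.random.default_rng(0)
true_signal = 1.0        # hypothetical constant sensory cause
prediction = 0.0         # the model's current belief
learning_rate = 0.1

for _ in range(200):
    sensory_input = true_signal + rng.normal(scale=0.2)  # noisy sample
    prediction_error = sensory_input - prediction        # compare
    prediction += learning_rate * prediction_error       # update belief

# After many cycles the prediction settles near the true cause, so
# prediction errors (and hence the cost of encoding the input) shrink.
print(round(prediction, 2))
```

The same scheme scales up conceptually: richer internal models make richer predictions, but the currency of learning remains the mismatch between predicted and actual input.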
The ability of the human brain to generate predictions about yet-unseen data, and to update its models based on the mismatch between predicted and actual data, may be a fundamental building block underlying the intelligence of the human mind.
We examine the computational and neural implementation of prediction in perception and cognition. We use an integrative and multidisciplinary approach, by investigating and comparing predictive processing in different modalities (visual, auditory, language), under both constrained and naturalistic conditions, using complementary techniques (psychophysics, fMRI, MEG, AI-inspired computational modeling) and species (mouse, monkey, human).
The aim of our research is to contribute fundamental knowledge of the general operating principles of the brain that enable us to understand our surroundings and successfully interact with them.
Research Interests
We are interested in understanding how sensory information and prior expectations are dynamically combined in the brain. Using a combination of neuroimaging tools (fMRI, MEG), computational models, and psychophysical paradigms, we study the form and neural implementation of such predictions in vision (e.g., Fritsche et al., 2020; Kok et al., 2017), audition (Kern et al., 2022; Todorovic et al., 2012) and language (Heilbron et al., 2022; Heilbron et al., 2020), in both experimentally controlled and more naturalistic environments.
Image: Expectations Induce Pre-Stimulus Sensory Templates: It has been proposed that prior expectations may induce stimulus templates in sensory cortex before the actual presentation of the stimulus, leading to improved processing of a predicted percept. Using MEG and multivariate decoding techniques to probe the representational content of neural signals in a time-resolved manner, this study found that the identity of expected gratings could be decoded from neural signals even before stimulus presentation, showing that expectations induce pre-activation of stimulus templates (see Figure). Check out Kok et al., 2017 for this and other fascinating results.
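The logic of time-resolved multivariate decoding can be sketched as follows (a simplified illustration on synthetic data, not the authors' actual pipeline): a separate classifier is fit at every time point of multi-sensor data, and the time course of decoding accuracy reveals when stimulus information becomes available.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Illustrative time-resolved decoding on synthetic "MEG" data:
# trials x sensors x time points, with two hypothetical stimulus classes.
rng = np.random.default_rng(1)
n_trials, n_sensors, n_times = 100, 20, 30
labels = rng.integers(0, 2, n_trials)
data = rng.normal(size=(n_trials, n_sensors, n_times))

# Inject a weak class-dependent sensor pattern from time point 10 onward,
# mimicking pre-activation of a stimulus template before onset at t = 20.
pattern = rng.normal(size=n_sensors)
data[labels == 1, :, 10:] += 0.5 * pattern[:, None]

# Fit and cross-validate one classifier per time point.
accuracy = np.array([
    cross_val_score(LogisticRegression(), data[:, :, t], labels, cv=5).mean()
    for t in range(n_times)
])
# Accuracy hovers near chance before the pattern appears and rises after,
# which is the signature of information emerging at a specific latency.
```

In the actual study, above-chance decoding before stimulus onset is what licenses the conclusion that the template is pre-activated rather than stimulus-evoked.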
We explore how the brain learns about a variety of statistical regularities in the world to build internal predictive models that aid perception and cognition.
For example, we study differences between incidental and intentional learning of regularities (Ferrari et al., 2022), the importance of goal-directed attention (Richter et al., 2019), and the learning rules of statistical learning (Nazli et al., 2024).
Image: Statistical Learning of Spatial Structures in Visual Scenes: Participants were familiarized with arbitrary displays consisting of four objects (E1-E4) in a specific layout. After familiarization, neural activity was recorded while participants passively watched both familiar displays (A) and shuffled displays (B; containing the same objects, but in a surprising spatial layout). Neural activity (C) was stronger for shuffled than for familiar displays throughout the ventral visual stream, encompassing the primary visual cortex, lateral occipital complex, and parahippocampal cortex. These results highlight how violations of implicitly learned spatial regularities modulate activity in the visual system. To find out more about how the brain exploits the statistics of the environment, visit: Yan et al., 2023.
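One simple way to think about what is learned in such paradigms is co-occurrence statistics of object pairs and their relative positions. The sketch below is a hypothetical toy model (not the study's analysis): it tallies pairwise spatial relations during familiarization and then scores test displays, so that a shuffled layout comes out as surprising.

```python
from collections import Counter

# Toy model of spatial statistical learning: learn which object pairs
# occur in which relative positions, then score new displays by how
# expected their layout is under those learned counts.

def layout_pairs(display):
    """All (object_a, object_b, relative_position) triples in a display."""
    pairs = []
    for obj_a, pos_a in display:
        for obj_b, pos_b in display:
            if obj_a != obj_b:
                rel = (pos_b[0] - pos_a[0], pos_b[1] - pos_a[1])
                pairs.append((obj_a, obj_b, rel))
    return pairs

# Familiarization: the same four objects always appear in one layout.
familiar = [("E1", (0, 0)), ("E2", (1, 0)), ("E3", (0, 1)), ("E4", (1, 1))]
counts = Counter()
for _ in range(50):
    counts.update(layout_pairs(familiar))

def familiarity(display):
    """Summed co-occurrence counts: high = expected, low = surprising."""
    return sum(counts[p] for p in layout_pairs(display))

# Shuffled display: same objects, rotated (and thus unfamiliar) layout.
shuffled = [("E1", (1, 1)), ("E2", (0, 1)), ("E3", (1, 0)), ("E4", (0, 0))]
```

Under this toy model the familiar layout scores high and the shuffled one scores zero, mirroring the idea that the shuffled display violates the implicitly learned spatial regularities.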
Evidence from anatomical, physiological, and behavioral studies has shown that the visual cortex employs feedforward processing, lateral recurrence, and feedback recurrence during sensory information processing. However, the specific roles of these different types of information flow are still debated. We investigate the consequences of various types of recurrence in artificial neural networks (ANNs), and their alignment with biological neural networks. Moreover, we empirically study feedforward and feedback processes in the human brain using ultra-high-field fMRI at 7 Tesla to image layer-resolved neural activity.
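The three connection types can be made concrete in a small two-area rate network (an assumed toy architecture for illustration, not one of the lab's models): feedforward weights carry signals from area 1 to area 2, lateral weights implement within-area recurrence, and feedback weights return signals from area 2 to area 1.

```python
import numpy as np

# Toy two-area recurrent rate network distinguishing feedforward,
# lateral, and feedback connections, unrolled over discrete time steps.
rng = np.random.default_rng(3)
n = 8
W_ff   = rng.normal(size=(n, n)) * 0.10  # feedforward: area 1 -> area 2
W_lat1 = rng.normal(size=(n, n)) * 0.05  # lateral recurrence within area 1
W_lat2 = rng.normal(size=(n, n)) * 0.05  # lateral recurrence within area 2
W_fb   = rng.normal(size=(n, n)) * 0.10  # feedback: area 2 -> area 1

r1 = np.zeros(n)  # area 1 firing rates
r2 = np.zeros(n)  # area 2 firing rates
stimulus = rng.normal(size=n)

for _ in range(20):
    # Each area combines its inputs and passes them through a nonlinearity.
    r1_new = np.tanh(stimulus + W_lat1 @ r1 + W_fb @ r2)
    r2_new = np.tanh(W_ff @ r1 + W_lat2 @ r2)
    r1, r2 = r1_new, r2_new
```

Ablating one weight matrix at a time (e.g. setting W_fb to zero) and comparing the resulting dynamics or task performance is one simple way to probe the functional role of each type of recurrence.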
Image: Laminar Organization of Internally Generated Signals in Early Visual Cortex: Human early visual cortex is not only activated by visual information but also by top-down cognitive processes, such as keeping a visual stimulus in working memory (A). These internally generated signals do not resemble perception, suggesting that they must be organized differently from bottom-up sensory signals. Using laminar fMRI, we found that internally generated signals are kept apart from sensory data by targeting the deep and superficial layers of the cortex, while bottom-up information activates the middle layers. For more detail, take a look at Lawrence et al., 2018.
Empirical studies support the notion that prior knowledge strongly influences sensory and cognitive processes, yet the computational mechanisms underlying this influence remain poorly understood. To explore these mechanisms, we use a variety of neuroscience-inspired AI algorithms, in particular artificial neural networks (ANNs), as models of neural information processing.
Image: High-Level Surprise Modulations in Early Visual Cortex: While numerous studies have shown stronger neural activity for surprising compared to familiar inputs, it is unclear what expectations are formed across the cortical hierarchy, and therefore what kind of surprise drives this upregulation of activity. Here, ANNs were leveraged to quantify surprise at different levels of granularity and pinpoint what modulates neural activity. The results highlighted above show that activity in both low- and high-level visual cortex scaled primarily with high-level, but not low-level, visual surprise, suggesting that high-level predictions may help constrain perceptual interpretations in earlier areas. If you are curious to dive deeper into these results and their interpretations, take a look at Richter et al., 2024.
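The core idea of graded surprise measures can be sketched as follows (a hedged toy example with an arbitrary random network, not the paper's actual models or analysis): "surprise" for an input is its mismatch from an expectation, computed either in raw input space (low-level) or in the feature space of a deeper network layer (high-level).

```python
import numpy as np

# Toy quantification of low- vs. high-level surprise using a small
# random two-layer ReLU network as a stand-in feature extractor.
rng = np.random.default_rng(2)
W1 = rng.normal(size=(64, 16)) / 8.0  # hypothetical layer-1 weights
W2 = rng.normal(size=(16, 8)) / 4.0   # hypothetical layer-2 weights

def features(x, depth):
    """Propagate input x through the first `depth` ReLU layers."""
    h = x
    for W in (W1, W2)[:depth]:
        h = np.maximum(h @ W, 0.0)
    return h

expected = rng.normal(size=64)                       # the predicted input
actual = expected + rng.normal(scale=0.3, size=64)   # a surprising input

# Low-level surprise: mismatch in raw input (e.g. pixel) space.
low_level_surprise = np.linalg.norm(actual - expected)
# High-level surprise: mismatch in deep feature space.
high_level_surprise = np.linalg.norm(features(actual, 2) - features(expected, 2))
```

Regressing neural responses onto such level-specific surprise measures is one way to ask which level of prediction best explains the activity upregulation in a given brain area.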
Key Publications
- Review: de Lange FP, Heilbron M, Kok P (2018). How Do Expectations Shape Perception? Trends in Cognitive Sciences, 22 (9), 764-779. doi.org/10.1016/j.tics.2018.06.002 [PDF].
- Laminar fMRI: Lawrence SJD, Norris DG, de Lange FP (2019). Dissociable laminar profiles of concurrent bottom-up and top-down modulation in the human visual cortex. eLife, 8, e44422. doi.org/10.7554/eLife.44422 [PDF] [Data and Code]
- MEG decoding: Kok P, Mostert P, de Lange FP (2017). Prior expectations induce pre-stimulus sensory templates. PNAS, 114 (39), 10473-10478. doi.org/10.1073/pnas.1705652114 [PDF] [Data and Code]
- Computational modeling: Fritsche M, Spaak E, de Lange FP (2020). A Bayesian and efficient observer model explains concurrent attractive and repulsive history biases in visual perception. eLife, 9, e55389. doi.org/10.7554/eLife.55389 [PDF] [Data and Code]
- NeuroAI: Richter D, Kietzmann T, de Lange FP (2024). High-level visual prediction errors in early visual cortex. PLoS Biol, 22 (11), e3002829. doi.org/10.1371/journal.pbio.3002829 [PDF] [Data and Code]