TLDR: A new research paper introduces the Recurrent Divisive Normalization (RDN) model, a unified neural circuit that combines divisive normalization and self-excitation. This model demonstrates how the brain can achieve both robust, noise-resistant information encoding and stable, persistent memory maintenance within a single cortical microcircuit. Its capabilities were shown in tasks involving perceptual denoising and probabilistic inference, offering a novel framework for understanding brain computation and guiding the design of bio-inspired artificial intelligence.
Understanding how the brain efficiently processes information, filters out noise, and maintains memories has long been a central challenge in neuroscience. Traditionally, models have addressed these critical functions—noise-resistant processing and information maintenance—using separate neural mechanisms. However, a new research paper introduces a unified framework that integrates both operations within a single cortical circuit, offering fresh insights into the brain’s fundamental computations.
The paper, titled “A Unified Cortical Circuit Model with Divisive Normalization and Self-Excitation for Robust Representation and Memory Maintenance,” proposes a novel recurrent neural circuit model. This model combines two key mechanisms: divisive normalization, which helps filter out irrelevant variability and preserve essential signal features, and self-excitation, which enables information to be held and represented over time to support memory and planning.
The Recurrent Divisive Normalization (RDN) Model
At its core, the proposed Recurrent Divisive Normalization (RDN) model consists of excitatory neurons connected to a global inhibitory pool. Each excitatory neuron receives external input and also feeds back onto itself (self-excitation), with its activity then divided by the overall activity of the inhibitory pool. This structure allows the model to perform both robust encoding and stable retention of normalized inputs.
Mathematical analysis of the RDN model reveals that, under specific parameter conditions, the system forms a continuous attractor. This means it can stabilize inputs proportionally during their presentation and, crucially, maintain self-sustained memory states even after the original stimulus is removed. This dual capability addresses a significant gap in previous models: activity in normalization circuits typically faded once the input was removed, while attractor models preserved persistent activity but often lacked normalization's flexibility in adapting to new inputs.
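The paper's exact equations are not reproduced here, but the described circuit can be illustrated with a minimal simulation. The sketch below assumes a common form of such dynamics, tau * dr_i/dt = -r_i + (x_i + alpha * r_i) / (sigma + sum_j r_j), where alpha is the self-excitation strength and the denominator is the divisive inhibitory pool; the parameter names and values are illustrative assumptions, not the paper's published model. With alpha greater than sigma, this toy version shows both behaviors described above: activity proportional to the (normalized) input while the stimulus is on, and a self-sustained trace of the same pattern after the stimulus is removed.

```python
import numpy as np

# Toy sketch of recurrent divisive normalization with self-excitation.
# Parameter names (alpha, sigma, tau) and values are illustrative
# assumptions, not the paper's published equations.
def simulate_rdn(x, alpha=2.0, sigma=1.0, tau=1.0, dt=0.01, steps=4000):
    """Euler-integrate tau*dr_i/dt = -r_i + (x_i + alpha*r_i)/(sigma + sum_j r_j).

    The stimulus x is presented for the first half of the run, then removed.
    """
    r = np.zeros_like(x, dtype=float)
    trace = []
    for t in range(steps):
        inp = x if t < steps // 2 else np.zeros_like(x)  # stimulus off halfway
        r = r + (dt / tau) * (-r + (inp + alpha * r) / (sigma + r.sum()))
        trace.append(r.copy())
    return np.array(trace)

x = np.array([1.0, 2.0, 3.0])
trace = simulate_rdn(x)

r_stim = trace[1999]  # steady state while the stimulus is on
r_mem = trace[-1]     # activity long after the stimulus is removed

print("with input :", np.round(r_stim, 3))  # proportional to x (normalized)
print("after input:", np.round(r_mem, 3))   # same pattern persists at lower gain
```

In this toy version, any activity pattern whose total equals alpha - sigma is a fixed point once the input is gone, so the post-stimulus states form a continuous attractor: the relative pattern is preserved while the overall activity relaxes onto the attractor.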
Demonstrating Versatility in Cognitive Tasks
To showcase the model’s versatility, the researchers applied it to two canonical cognitive tasks:
- Noise-Robust Encoding in a Random-Dot Kinematogram (RDK) Paradigm: In this task, participants typically discern motion direction from noisy visual stimuli. The RDN model effectively filtered out noise from the input signals, producing significantly denoised readouts. This demonstrated its ability to achieve robust noise suppression, aligning with how the brain denoises sensory inputs to form reliable estimates of motion.
- Approximate Bayesian Belief Updating in a Probabilistic Wisconsin Card Sorting Test (pWCST): The WCST is a classic test of cognitive flexibility, requiring individuals to learn and switch rules based on feedback. The RDN model successfully performed rapid rule-switching detection in deterministic versions of the task and maintained robust performance even when feedback was probabilistic. This highlights the model’s capacity for approximate Bayesian inference, crucial for adaptive behavior in uncertain environments.
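To make the pWCST computation concrete, the sketch below shows the kind of reference calculation the RDN circuit is said to approximate: exact Bayesian belief updating over candidate sorting rules under probabilistic feedback. The feedback reliability (0.85) and the hazard rate for rule switches (0.05) are made-up values for illustration, not parameters from the paper.

```python
import numpy as np

# Exact Bayesian belief update over candidate sorting rules, the reference
# computation the RDN model is said to approximate. Reliability and hazard
# values below are illustrative assumptions, not from the paper.
def update_beliefs(beliefs, feedback_correct, rule_predicts_correct,
                   reliability=0.85, hazard=0.05):
    # Likelihood of the observed feedback under each candidate rule:
    # a rule consistent with the feedback is right with prob `reliability`.
    p_fb = np.where(rule_predicts_correct == feedback_correct,
                    reliability, 1.0 - reliability)
    posterior = beliefs * p_fb
    posterior /= posterior.sum()
    # Account for occasional covert rule switches: mix toward uniform.
    n = len(beliefs)
    return (1 - hazard) * posterior + hazard / n

beliefs = np.ones(3) / 3  # three candidate rules: e.g. color, shape, number
# Feedback says "correct", and only rule 0 predicted a correct match:
beliefs = update_beliefs(beliefs, True, np.array([True, False, False]))
print(np.round(beliefs, 3))  # belief mass shifts toward rule 0
```

Repeating this update trial by trial yields fast rule-switch detection when feedback is deterministic and graded, noise-tolerant beliefs when it is probabilistic, mirroring the behavior reported for the RDN model.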
Bridging Fundamental Research Areas
This work establishes a unified mathematical framework that bridges traditionally separate research areas: noise suppression, working memory, and approximate Bayesian inference. By demonstrating how divisive normalization and attractor dynamics can coexist and synergize within a single cortical microcircuit, the RDN model offers a parsimonious alternative to more complex modular architectures that require specialized subsystems for different functions.
The findings suggest that common computational principles may underlie diverse neural functions, providing a plausible neural-circuit implementation of the Bayesian brain hypothesis. While the model simplifies some biological details and relies on pre-tuned weights, it opens new avenues for understanding cortical efficiency and guiding the design of more adaptive and biologically plausible artificial neural systems.
For more in-depth information, you can read the full research paper here.


