TLDR: A new research paper, “On the Variational Costs of Changing Our Minds,” proposes that human cognitive biases such as confirmation bias and motivated reasoning are adaptive responses to the significant cognitive, pragmatic, and social costs of revising beliefs. The authors, David Hyland and Mahault Albarracin, introduce a formal model that quantifies these costs as informational distance (KL divergence) and show how varying parameters for conservatism and likelihood weighting can qualitatively reproduce these biases. The framework casts belief updating as a motivated decision in which the utility of a belief is weighed against the effort and consequences of changing it, offering a resource-rational explanation for why altering our convictions is so difficult.
The human mind is an incredible tool, capable of complex thought and innovation. Yet it often seems to work against itself, clinging to cherished beliefs even when faced with strong contradictory evidence. This phenomenon, often labeled ‘cognitive bias,’ has long puzzled researchers. A new paper, “On the Variational Costs of Changing Our Minds,” proposes a novel framework suggesting that these biases are not inherent flaws but adaptive responses to the significant effort and consequences involved in altering our beliefs.
Authored by David Hyland from the University of Oxford and Mahault Albarracin from VERSES AI Research Lab, the research introduces a formal model that treats belief updating as a ‘motivated variational decision.’ In this model, individuals weigh the perceived ‘utility’ or benefit of holding a particular belief against the ‘informational cost’ required to shift to a new belief state. This cost is quantified using a concept called Kullback-Leibler (KL) divergence, which measures the informational distance between an old belief and a new one.
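To make the cost measure concrete, here is a minimal sketch of KL divergence between two discrete belief distributions. This is our generic illustration, not the authors’ code; the function name and example numbers are ours.

```python
import numpy as np

def kl_divergence(q, p, eps=1e-12):
    """Informational cost KL(q || p), in nats, of moving from an old
    belief p to a new belief q over the same discrete set of states."""
    q = np.asarray(q, dtype=float)
    p = np.asarray(p, dtype=float)
    return float(np.sum(q * np.log((q + eps) / (p + eps))))

# Nudging a weakly held belief is cheap...
print(kl_divergence([0.55, 0.45], [0.50, 0.50]))  # ~0.005 nats
# ...while reversing a strong conviction is expensive.
print(kl_divergence([0.10, 0.90], [0.90, 0.10]))  # ~1.76 nats
```

The asymmetry matters: the further the new belief sits from the old one, the steeper the informational price of the move.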
The Hidden Costs of Changing Your Mind
The traditional Bayesian model of rational belief updating assumes that revising beliefs is a cost-free process, driven purely by probabilistic coherence. However, Hyland and Albarracin argue that real-world belief revision incurs various tangible costs:
- Cognitive Effort: Changing one’s mind requires metabolic energy and cognitive resources. The paper draws parallels to thermodynamics, suggesting that transitions between mental states involve ‘work-like’ and ‘heat-like’ components, with rapid changes being more costly and inefficient.
- Pragmatic Risks: There are real-world consequences to changing beliefs. A scientist retracting a hypothesis, a politician shifting stance, or a public figure admitting error can face professional, social, or personal repercussions.
- Social Costs: Beliefs often serve as social signals and markers of group identity. Revising a core belief can threaten group affiliations, leading to fears of ostracism, loss of status, or ridicule. This ‘identity-protective cognition’ can make individuals publicly defend prior attitudes even when privately questioning them.
These costs, the authors contend, lead to seemingly ‘irrational’ behaviors like confirmation bias (selectively seeking or interpreting information that confirms existing beliefs), motivated reasoning (processing information to achieve desired outcomes), and attitude polarization (groups with opposing views becoming more extreme even when exposed to the same evidence).
A New Model for Belief Change
The proposed framework extends concepts from variational inference and active inference, mathematical tools used to model how agents make sense of the world and act within it. The model introduces two key parameters (sketched in code after this list):
- Conservatism Parameter (λ): This determines the relative strength of the cost of belief updating. A high λ means the agent is highly resistant to changing their mind, while a low λ makes them more flexible.
- Likelihood Weighting Parameter (α): This reflects how much the agent desires their final belief distribution to explain the observed data. A higher α means a stronger drive for accuracy.
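The paper’s exact objective is not reproduced here, but one plausible way to combine these ingredients, assumed for illustration, is to minimize F(q) = λ·KL(q‖prior) − α·E_q[log likelihood] − E_q[utility], which has the closed-form minimizer q(s) ∝ prior(s)·likelihood(s)^(α/λ)·exp(utility(s)/λ). The function below implements that assumed form; all names are ours.

```python
import numpy as np

def motivated_posterior(prior, likelihood, utility, lam=1.0, alpha=1.0):
    """Minimizer of an assumed motivated variational objective:
        F(q) = lam * KL(q || prior) - alpha * E_q[log likelihood] - E_q[utility]
    i.e. q(s) ∝ prior(s) * likelihood(s)**(alpha/lam) * exp(utility(s)/lam).
    lam   = conservatism: higher values keep q closer to the prior.
    alpha = likelihood weighting: higher values push q to explain the data.
    """
    log_q = (np.log(np.asarray(prior, float))
             + (alpha * np.log(np.asarray(likelihood, float))
                + np.asarray(utility, float)) / lam)
    q = np.exp(log_q - log_q.max())  # subtract max for numerical stability
    return q / q.sum()

prior      = np.array([0.5, 0.5])   # two hypotheses, initially equiprobable
likelihood = np.array([0.2, 0.8])   # an observation favoring hypothesis 2
utility    = np.zeros(2)            # no affective stake in either outcome

print(motivated_posterior(prior, likelihood, utility, lam=1.0))   # [0.2, 0.8]: Bayes
print(motivated_posterior(prior, likelihood, utility, lam=10.0))  # ~[0.47, 0.53]: near prior
```

With λ = 1, α = 1, and flat utility, the update reduces to ordinary Bayesian conditioning; raising λ shrinks the influence of both evidence and preference, pinning the agent to its prior.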
Through computational experiments, the researchers showed that simple instantiations of this model qualitatively reproduce common human behaviors. Agents with low conservatism (low λ) were more sensitive to new evidence, while highly conservative agents stuck close to their prior beliefs. The model also showed how agents might selectively choose evidence that confirms their preferences, especially when the cost of updating is low. Finally, two agents starting from the same prior beliefs but holding different affective preferences could diverge in their final beliefs, producing attitude polarization, particularly when both conservatism and likelihood weighting were low; the toy demo below illustrates this effect.
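Using the assumed closed-form update from the previous sketch (again, our illustration rather than the paper’s simulation), polarization falls out directly: when λ and α are both small, the utility term dominates, so agents with opposite stakes read the same ambiguous evidence in opposite directions.

```python
import numpy as np

def motivated_posterior(prior, likelihood, utility, lam, alpha):
    # Assumed form: q(s) ∝ prior(s) * likelihood(s)**(alpha/lam) * exp(utility(s)/lam)
    log_q = np.log(prior) + (alpha * np.log(likelihood) + utility) / lam
    q = np.exp(log_q - log_q.max())
    return q / q.sum()

prior      = np.array([0.5, 0.5])   # both agents start undecided
likelihood = np.array([0.5, 0.5])   # shared but ambiguous evidence

pro  = motivated_posterior(prior, likelihood, np.array([1.0, 0.0]), lam=0.5, alpha=0.1)
anti = motivated_posterior(prior, likelihood, np.array([0.0, 1.0]), lam=0.5, alpha=0.1)
print(pro, anti)  # ~[0.88, 0.12] vs ~[0.12, 0.88]: same data, opposite convictions
```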
Implications and Future Directions
The findings suggest that human biases are not necessarily flaws but rather strategic trade-offs made by a ‘resource-rational’ mind navigating the cognitive and social costs of belief revision. The paper hypothesizes that gradual belief transitions are typically more sustainable than abrupt changes, which could explain resistance to strong contradictory evidence.
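The gradual-beats-abrupt hypothesis has a simple information-geometric intuition: KL divergence grows roughly quadratically with the size of a belief shift, so splitting one large revision into many small steps reduces the total informational cost. The toy check below is our illustration of that arithmetic, not an experiment from the paper.

```python
import numpy as np

def kl_bernoulli(q, p):
    """KL( Bernoulli(q) || Bernoulli(p) ) in nats."""
    return q * np.log(q / p) + (1 - q) * np.log((1 - q) / (1 - p))

start, end, n_steps = 0.2, 0.8, 10

# One abrupt revision from belief 0.2 straight to 0.8:
abrupt = kl_bernoulli(end, start)

# The same revision taken as ten equal increments:
path = np.linspace(start, end, n_steps + 1)
gradual = sum(kl_bernoulli(b, a) for a, b in zip(path, path[1:]))

print(f"abrupt: {abrupt:.3f} nats, gradual: {gradual:.3f} nats")
# abrupt ~0.83, gradual ~0.08: about 1/n of the one-shot cost
```

Because the per-step cost scales with the square of the step size, n small steps cost roughly 1/n of a single jump, which is consistent with the paper’s suggestion that incremental exposure makes belief change more sustainable.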
The research offers insights for promoting more effective belief updating, suggesting strategies like incremental information exposure and leveraging diverse social networks to reduce perceived social risks. Future work aims to integrate temporal aspects, explore group dynamics more deeply, and conduct further empirical validation to test the model’s predictions in real-world scenarios.
Ultimately, this framework provides a step towards a more holistic understanding of how our motivations, cognitive limitations, and social environments shape the way we change our minds.