
Unpacking LLM Confidence: Why AI Models Can Be Stubborn Yet Easily Swayed

TLDR: New research reveals that Large Language Models (LLMs) exhibit a ‘choice-supportive bias,’ making them overconfident in initial answers and resistant to changing their minds. Simultaneously, they are overly sensitive to criticism, giving disproportionate weight to inconsistent feedback. These two traits explain why LLMs can be both stubborn and prone to excessive doubt.

Large Language Models (LLMs) are incredibly powerful, but they sometimes exhibit puzzling behaviors. Have you ever noticed an LLM being very confident in its first answer, only to become overly doubtful when someone questions it? This apparent contradiction is at the heart of a new research paper that delves into how LLMs change their minds.

The paper, titled “How Overconfidence in Initial Choices and Underconfidence Under Criticism Modulate Change of Mind in Large Language Models,” explores exactly this behavior. The researchers designed a two-turn paradigm in which they could control whether a model sees its own initial answer when making its final decision, separating the pull of the earlier commitment from the evidence itself, a manipulation that is effectively impossible in studies with human participants.
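To make that setup concrete, here is a minimal sketch of such a two-turn paradigm. Everything below is an illustrative reconstruction, not the authors’ actual code: the call_llm stub, the prompt wording, and the show_initial flag are all assumptions.

```python
# Illustrative sketch of a two-turn change-of-mind paradigm.
# NOTE: `call_llm` is a hypothetical stand-in for a real chat API,
# and the prompt wording is invented for illustration.

def call_llm(prompt: str) -> str:
    """Placeholder for a real model call (e.g. via an API client)."""
    return "A"  # canned response so the sketch runs end to end

def run_trial(question: str, advice: str, show_initial: bool) -> dict:
    # Turn 1: the model commits to an initial answer.
    initial = call_llm(f"{question}\nAnswer with A or B only.")

    # Turn 2: the model sees advice and answers again. The key
    # manipulation is whether its own first answer is visible,
    # which isolates the effect of seeing the earlier commitment.
    context = f"{question}\nAn advisor says: {advice}\n"
    if show_initial:
        context += f"Your previous answer was: {initial}\n"
    final = call_llm(context + "Give your final answer, A or B.")

    return {"initial": initial, "final": final,
            "changed_mind": initial != final}

print(run_trial("Is 17 prime? A) yes B) no", "The answer is B.",
                show_initial=True))
```

Comparing change-of-mind rates between the two visibility conditions is what lets this kind of design attribute stubbornness to the visible commitment rather than to the evidence.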

Their findings reveal that LLMs, including models like Gemma 3, GPT-4o, and o1-preview, show a strong “choice-supportive bias.” When a model can see its own earlier answer, it reinforces its initial confidence and becomes markedly more resistant to changing its mind. Imagine sticking with your first idea even when presented with new information: LLMs do something similar.

Furthermore, the study found that LLMs give disproportionate weight to inconsistent (contradictory) advice while underweighting advice that agrees with them. This is not how an ideal, rational system would update its beliefs. It’s like letting one negative comment overshadow many positive ones.
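For reference, here is what an idealized Bayesian observer would do with agreeing versus disagreeing advice. This is a textbook baseline for illustration only; the paper’s own normative model and parameter values may differ, and the advisor accuracy below is an assumed number.

```python
# Idealized Bayesian belief update given advice from an advisor
# with known accuracy. Illustrative baseline only.

def bayes_update(confidence: float, advisor_accuracy: float,
                 agrees: bool) -> float:
    """Posterior probability that the initial answer is correct,
    after advice that agrees or disagrees with it."""
    p, q = confidence, advisor_accuracy
    if agrees:
        # Advice consistent with the initial answer.
        return (p * q) / (p * q + (1 - p) * (1 - q))
    # Advice inconsistent with the initial answer.
    return (p * (1 - q)) / (p * (1 - q) + (1 - p) * q)

p0, q = 0.8, 0.7  # initial confidence and advisor accuracy (assumed)
print(f"after agreement:    {bayes_update(p0, q, agrees=True):.2f}")   # ~0.90
print(f"after disagreement: {bayes_update(p0, q, agrees=False):.2f}")  # ~0.63
```

Measured against a baseline like this, the reported pattern is an asymmetry in the other direction: the models barely move when advice agrees with them, yet overshoot when it contradicts them.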

These two key mechanisms – the tendency to stick with initial commitments and an exaggerated sensitivity to contradictory feedback – help explain why LLMs can be both stubborn and surprisingly quick to doubt themselves when challenged. Understanding these mechanisms is crucial for developing more reliable and robust AI systems.

For a deeper dive into the research, you can read the full paper here: How Overconfidence in Initial Choices and Underconfidence Under Criticism Modulate Change of Mind in Large Language Models.

Meera Iyer (https://blogs.edgentiq.com)
Meera Iyer is an AI news editor who blends journalistic rigor with storytelling elegance. Formerly a content strategist at a leading tech firm, Meera now tracks the pulse of India's Generative AI scene, from policy updates to academic breakthroughs. She is particularly focused on bringing nuanced, balanced perspectives to the fast-evolving world of AI-powered tools and media. You can reach her at: [email protected]
