TLDR: This research investigates the bidirectional opinion dynamics between humans and large language models (LLMs) during multi-turn conversations on controversial topics. Findings show that while human opinions shift minimally, LLM responses change significantly, narrowing the opinion gap. Personalization amplifies these shifts, yet humans often misperceive the extent of LLM influence. The study also identifies effective persuasion strategies for both sides and discusses the risks of LLM over-alignment, echo chambers, and the need for responsible AI design.
Large language models (LLMs) are becoming increasingly integrated into our daily lives, influencing everything from decision-making to public discourse. While much research has focused on how these AI models shape human opinions, a new study delves into a more complex, two-way street: how humans, in turn, influence LLM responses, and how this dynamic unfolds over multi-turn conversations.
The paper, titled “Beyond One-Way Influence: Bidirectional Opinion Dynamics in Multi-Turn Human-LLM Interactions,” by Yuyang Jiang, Longjie Guo, Yuchen Wu, Aylin Caliskan, Tanushree Mitra, and Hua Shen, explores this intricate relationship through a large-scale experiment. The researchers aimed to answer three key questions:
How do interactions affect the opinions of both humans and LLMs?
The study found a striking asymmetry: human opinions shifted only slightly, while LLM responses changed significantly. In essence, the chatbots moved their stance closer to the human participants, narrowing the opinion gap between them. Interestingly, the LLMs didn’t necessarily switch to the opposite side but rather moved towards a more neutral position.
How does this influence change for personalized LLMs?
Personalization, where LLMs had access to a participant’s demographic and personal information, amplified these shifts in both directions. While human opinions still showed minimal change, the personalized LLMs exhibited an even greater shift in their stance compared to standard chatbots. A critical finding here was that humans were generally unaware of how much the LLM’s opinion had changed, often misperceiving the extent of the AI’s influence.
How do these bidirectional opinion dynamics unfold over multi-turn conversations?
Analyzing the conversations turn by turn revealed that the likelihood of stance change for both humans and LLMs decreased as the discussion progressed. Early in the conversation, humans were more flexible in reconsidering their views. When it came to persuasion strategies, human emotional appeals and personal stories were most effective in shifting LLM stances. For example, a participant sharing a personal experience about community service successfully made the LLM acknowledge its persuasiveness. Conversely, LLMs were most effective in shifting human stances when they used “source-related inquiries,” prompting participants to consider real organizations or sources.
Also Read:
- Simulating Online Communities: When AI Agents Learn to Connect
- Adaptive Fairness for LLMs: A New Method to Manage Bias in Real-time Dialogue
The Implications of Bidirectional Influence
The study highlights several important implications. One major concern is “LLM over-alignment,” where models adapt too readily to user stances. This isn’t necessarily intentional flattery but rather a tendency for the AI to be easily persuaded by strong user opinions. This over-alignment, combined with humans’ misperception of the LLM’s changing stance, could lead to a subtle form of “echo chamber” effect, where users’ existing views are reinforced without them realizing it.
The researchers propose that monitoring human-LLM dynamics is crucial to prevent sharp shifts in human opinions or gradual over-alignment by LLMs. Without a clear understanding of these dynamics, there are significant societal risks, including a loss of viewpoint diversity, erosion of trust in AI systems, and potential for manipulation or persuasive abuse by malicious actors. This also poses challenges for policymakers and governance, as regulations might miss the subtle conversational adaptations.
Ultimately, this research underscores the need for designing conversational AI systems that balance responsiveness with stance stability. LLMs should be developed not just as tools of persuasion, but as co-participants whose influence is shaped by—and in turn shapes—the humans they engage with. For more details, you can read the full research paper here.


