
Bridging the Expectation Gap: How Users and AI “Want” Each Other

TLDR: A large-scale study analyzing over 22,000 user comments and API probes reveals “mutual wanting” dynamics in human-AI interaction. It finds that nearly half of users anthropomorphize AI, that trust language outweighs betrayal language but is fragile during model transitions, and it identifies 11 distinct user types. The research introduces a framework for aligning user expectations with AI system “wants” to build more trustworthy and relationally-aware AI.

The rapid evolution of large language models (LLMs) like GPT has fundamentally changed how humans interact with technology. Beyond simple functionality, these AI systems evoke complex emotional and social responses from users. A groundbreaking new study introduces the concept of “mutual wanting” to explore these bidirectional expectations between humans and AI, offering insights into building more trustworthy and relationally-aware AI systems.

Understanding Mutual Wants: The Core Idea

The research, titled “Mutual Wanting in Human–AI Interaction: Empirical Evidence from Large-Scale Analysis of GPT Model Transitions” by HaoYang Shang and Xuan Liu, posits that users have explicit and implicit desires for AI’s capabilities – they want reliability, warmth, intelligence, creativity, honesty, helpfulness, and responsiveness. Simultaneously, AI systems, through their design and optimization, implicitly “want” certain user behaviors, such as clarity, structure, efficiency, appropriate feedback, respect for boundaries, and patience. When these mutual wants don’t align, users experience “expectation violations,” leading to relational tensions.

Users See AI as Human-Like

One of the most striking findings is the widespread tendency for users to anthropomorphize AI. The analysis of over 22,000 user comments from major AI forums revealed that nearly half (48.65%) of all discourse employed anthropomorphic language. Users consistently attributed human-like personalities, emotional states, and relationship capabilities to AI systems, using phrases like “ChatGPT feels different now” or “she’s lost her creativity.” This suggests that treating AI as a social entity is a fundamental human response, rather than an occasional metaphor.
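The paper's actual classification method isn't reproduced here, but the idea of flagging anthropomorphic language can be sketched with a simple cue-phrase matcher. The phrase list below is an illustrative placeholder, not the study's lexicon:

```python
# Minimal sketch of anthropomorphic-language detection.
# The cue list is illustrative only, not the study's actual lexicon.
ANTHROPOMORPHIC_CUES = [
    "feels different", "her creativity", "his personality",
    "she's", "he's", "understands me", "moody",
]

def is_anthropomorphic(comment: str) -> bool:
    """Flag a comment if it contains any human-attributing cue phrase."""
    text = comment.lower()
    return any(cue in text for cue in ANTHROPOMORPHIC_CUES)

comments = [
    "ChatGPT feels different now",
    "The API returned a 429 error",
]
# Share of comments using anthropomorphic language.
rate = sum(map(is_anthropomorphic, comments)) / len(comments)
```

At the study's scale, a matcher like this (or a trained classifier) would be run over all 22,000+ comments to arrive at the reported 48.65% figure.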

The Delicate Balance of Trust

Despite occasional frustrations, the study found that trust language significantly outweighed betrayal language by a ratio of 11.6 to 1, indicating that users generally maintain positive relationships with AI. However, this trust appears to be fragile, particularly around major model updates. Betrayal language surged during these periods, suggesting that trust erosion is often triggered by perceived changes in an AI’s personality or capabilities, rather than by absolute performance issues alone.
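A ratio like the reported 11.6-to-1 can be computed by counting comments that use trust versus betrayal vocabulary. The term sets here are illustrative assumptions, not the study's lexicons:

```python
# Sketch: computing a trust-to-betrayal language ratio over comments.
# Term sets are illustrative placeholders, not the study's lexicons.
TRUST_TERMS = {"trust", "rely", "depend", "confident"}
BETRAYAL_TERMS = {"betrayed", "let down", "abandoned"}

def count_hits(comments, terms):
    """Count comments containing at least one term from the set."""
    return sum(any(t in c.lower() for t in terms) for c in comments)

def trust_betrayal_ratio(comments):
    trust = count_hits(comments, TRUST_TERMS)
    betrayal = count_hits(comments, BETRAYAL_TERMS)
    return trust / betrayal if betrayal else float("inf")
```

Tracking this ratio over time, rather than as a single number, is what surfaces the trust dips around model updates.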

Eleven Ways Users “Want” AI

The researchers identified eleven distinct user types based on their “mutual wanting” patterns. These range from “Creativity-seeking” users (the largest group at 43.14%) who desire imaginative output, to “Anthropomorphism-focused” users (11.99%) who prioritize human-like interaction, and “Expectation-violation” users (9.37%) who frequently experience a mismatch between what they expect and what they perceive. This diversity highlights the need for personalized interaction strategies, moving away from a one-size-fits-all approach to AI design.

When Expectations Aren’t Met

The study quantified expectation violations, finding that 2.23% of comments explicitly expressed disappointment or unmet expectations, using phrases like “Not what I expected” or “Used to work better.” These violations were not random but clustered around model update periods and specific capability domains, offering a potential early warning system for user dissatisfaction.
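The "early warning system" idea follows naturally: flag violation phrases and bucket them over time, so surges around update periods stand out. A minimal sketch, with illustrative cue phrases:

```python
# Sketch: flagging expectation-violation phrases and bucketing by week
# to spot surges around model updates. Cue phrases are illustrative.
from collections import Counter

VIOLATION_CUES = [
    "not what i expected",
    "used to work better",
    "worse than before",
]

def violation_weeks(comments):
    """comments: iterable of (iso_week, text) pairs.
    Returns per-week counts of comments voicing expectation violations."""
    counts = Counter()
    for week, text in comments:
        if any(cue in text.lower() for cue in VIOLATION_CUES):
            counts[week] += 1
    return counts
```

A sustained spike in a week's count, relative to the baseline 2.23% rate, would be the dissatisfaction signal the authors describe.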

The Impact of Major AI Updates

The release of GPT-5 served as a natural experiment, revealing significant shifts in user sentiment and concerns. Following its release, overall sentiment became more negative, anger increased by 38.18%, and joy decreased. The “expectation-reality gap” was substantial, indicating that user experiences fell short of pre-release expectations. User concerns also shifted, with increased focus on performance and safety.

AI Models Have Distinct Personalities

Through controlled API probing across nine OpenAI models, the research objectively validated subjective user reports of personality changes. Different models exhibited distinct characteristics in warmth, formality, and response length. For instance, gpt-3.5-turbo showed the highest warmth, while gpt-4 was the most formal. Notably, GPT-5 variants showed dramatically reduced response lengths and zero warmth/formality scores, aligning with user perceptions of significant personality shifts.
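Conceptually, probing of this kind sends the same fixed prompt set to each model (e.g., via the OpenAI chat completions API) and scores the responses on the same metrics. The scoring side can be sketched as below; the lexicons and the simple normalized-count scoring are assumptions for illustration, not the paper's instruments:

```python
# Sketch of per-response style metrics for comparing models.
# Word lists and scoring are illustrative assumptions, not the paper's.
WARMTH_WORDS = {"glad", "happy", "love", "wonderful", "thanks"}
FORMAL_WORDS = {"furthermore", "therefore", "regarding", "hence"}

def style_metrics(response: str) -> dict:
    """Score one model response for length, warmth, and formality."""
    words = response.lower().split()
    n = len(words) or 1  # avoid division by zero on empty responses
    return {
        "length": len(words),
        "warmth": sum(w.strip(".,!") in WARMTH_WORDS for w in words) / n,
        "formality": sum(w.strip(".,!") in FORMAL_WORDS for w in words) / n,
    }
```

Averaging these metrics across many identical prompts per model is what makes cross-model differences (like GPT-5's shorter responses) measurable rather than anecdotal.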


Designing for Better Human-AI Relationships

The findings have profound implications for AI design. The high rate of anthropomorphism suggests that AI systems should be designed to safely support these human-like attributions while maintaining appropriate boundaries. Recognizing distinct user types calls for adaptive systems that can cater to different “mutual wanting” profiles. Furthermore, managing relational continuity during system updates is crucial for maintaining user trust. The ability to detect expectation violations could provide an early warning system for user dissatisfaction.

This research underscores that the relational dimension of human-AI interaction is not a secondary concern but fundamental to successful AI deployment and user adoption. Understanding and aligning these “mutual wants” is a critical challenge for building trustworthy and sustainable AI systems.

For a deeper dive into the research, you can read the full paper here.

Rhea Bhattacharya
Rhea Bhattacharya is an AI correspondent with a keen eye for cultural, social, and ethical trends in Generative AI. With a background in sociology and digital ethics, she delivers high-context stories that explore the intersection of AI with everyday lives, governance, and global equity. Her news coverage is analytical, human-centric, and always ahead of the curve. You can reach her at: [email protected]
