TL;DR: This research introduces and validates the Human–AI Trust Scale (HAITS), a new tool to measure trust in generative AI (GenAI) that considers both rational and relational aspects. It identifies four key dimensions: Affective Trust, Competence Trust, Benevolence & Integrity, and Perceived Risk. The study also reveals six distinct user trust profiles, from “Full-Spectrum Distrusters” to “Full-Spectrum Low-Risk Trusters,” highlighting the complex and diverse ways people interact with and trust GenAI, with notable cultural differences between China and the U.S.
Understanding how people trust artificial intelligence, especially the new generation of generative AI (GenAI) systems, is crucial as these technologies become more integrated into our daily lives. Unlike older AI, GenAI systems don’t just process information; they can converse, respond, and collaborate, making the line between a tool and a partner increasingly blurry. Traditional ways of measuring trust in AI often focused only on its functionality, like reliability and accuracy, overlooking the important social and emotional aspects that are now highly relevant.
To address this gap, researchers Haocan Sun, Weizi Liu, Di Wu, Guoming Yu, and Mike Yao introduced and validated a new measurement tool called the Human–AI Trust Scale (HAITS). This scale is specifically designed to capture both the rational (instrumental) and relational dimensions of trust in GenAI. Their work involved extensive research, including drawing on existing trust theories, conducting qualitative interviews, and analyzing two large-scale surveys in China and the United States.
The Four Pillars of Human–AI Trust
Through their analysis, the researchers identified four key dimensions that make up human–AI trust in the GenAI era:
- Affective Trust: This dimension relates to the emotional connection and feelings users have towards AI, such as feeling the AI is a part of them or a close friend, and using it for emotional support or relaxation.
- Competence Trust: This focuses on the AI’s capabilities and performance, assessing whether it is reliable, effective, efficient, and provides accurate information.
- Benevolence & Integrity: This dimension reflects the belief that the AI has good intentions and will act ethically, not knowingly doing harm or betraying confidence. It captures a sense of institutional trust.
- Perceived Risk: This dimension represents distrust, encompassing concerns about the AI being deceptive, behaving in an underhanded manner, or having potentially harmful outcomes.
These dimensions highlight that trust in GenAI is not a simple, single concept but a complex interplay of how we perceive its abilities, its intentions, our emotional connection to it, and the potential risks involved. The HAITS scale was rigorously tested and found to be reliable and applicable across different cultural and gender groups.
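One standard check behind a claim of scale reliability is internal consistency, commonly reported as Cronbach's alpha for each subscale. The paper's exact validation procedure is not detailed here, so the sketch below is only an illustration of how such a reliability index is computed, using made-up responses to a hypothetical three-item subscale:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) response matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical five-point ratings: 6 respondents x 3 items of one subscale
responses = np.array([
    [4, 5, 4],
    [3, 3, 4],
    [5, 5, 5],
    [2, 2, 3],
    [4, 4, 4],
    [3, 4, 3],
])
alpha = cronbach_alpha(responses)  # values near 1 indicate consistent items
```

Values above roughly 0.7 are conventionally read as acceptable internal consistency; a validated multidimensional scale like HAITS would report such an index per dimension, alongside factor-analytic and invariance evidence.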
Unveiling Diverse Trust Profiles
Beyond identifying the dimensions of trust, the study also explored how these dimensions combine in different individuals. Using a technique called latent profile analysis, the researchers classified users into six distinct trust profiles, revealing that people don’t all trust AI in the same way:
- Moderate Trusters (24.7%): These users show a balanced, moderate level across all four trust dimensions.
- Full-Spectrum Distrusters (4.6%): This group exhibits low affective and competence trust, high perceived risk, and low benevolence and integrity. They generally distrust GenAI across the board.
- Uncertain Distrusters (7.4%): These individuals have moderate competence trust but low institutional and affective trust, suggesting they are still in an early stage of evaluating AI, feeling ambivalent rather than outright rejecting it.
- Full-Spectrum Low-Risk Trusters (53.7%): This is the largest group, combining uniformly high institutional trust and technology-specific trust (both affective and competence) with low perceived risk.
- Rational Trusters (6.0%): These users have high confidence in the AI’s competence and institutional trust but remain emotionally distant. Their trust is based on a logical assessment of utility and low perceived risk.
- Full-Spectrum High-Risk Trusters (3.7%): This profile shows strong affective trust, competence trust, and institutional trust, but surprisingly, also retains a high level of perceived risk. This suggests a complex state where users might trust the AI’s capabilities and even feel an emotional bond, yet remain wary of potential downsides.
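Latent profile analysis is, in essence, a finite mixture model fitted to continuous indicators, here the respondents' scores on the four trust dimensions. The paper's analysis was presumably run in specialized statistical software, but a rough analogue can be sketched with a Gaussian mixture model; everything below (the cluster means, sample sizes, and two-profile setup) is synthetic and purely illustrative:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Hypothetical per-respondent mean scores on the four HAITS dimensions:
# [affective, competence, benevolence_integrity, perceived_risk]
# Two synthetic clusters stand in for, e.g., "Full-Spectrum Low-Risk
# Trusters" (high trust, low risk) and "Full-Spectrum Distrusters".
trusters = rng.normal([4.2, 4.4, 4.1, 1.8], 0.3, size=(120, 4))
distrusters = rng.normal([1.9, 2.1, 2.0, 4.3], 0.3, size=(40, 4))
scores = np.vstack([trusters, distrusters])

# Fit a two-component mixture and assign each respondent to a profile.
gmm = GaussianMixture(n_components=2, random_state=0).fit(scores)
profiles = gmm.predict(profiles_input := scores)

# In practice, the number of profiles (six in the paper) is chosen by
# comparing fit indices such as BIC across candidate solutions.
bic = gmm.bic(scores)
```

With well-separated synthetic clusters the mixture recovers the two groups cleanly; real survey data is far noisier, which is why profile counts are justified with fit indices and interpretability rather than assumed in advance.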
The study also found significant cross-national differences, with Chinese users more concentrated in high-trust groups, particularly the Full-Spectrum Low-Risk Trusters, compared to U.S. participants. This suggests that cultural and institutional environments play a role in shaping how trust in AI is distributed across populations.
In conclusion, this research provides a robust framework and a validated tool for measuring human–AI trust in the age of generative AI. It moves beyond simple functional assessments to integrate emotional, ethical, and risk perceptions, offering valuable insights for designing more trustworthy AI systems and understanding how trust evolves in human–AI interactions. You can read the full paper here.


