
Consumer Trust and AI: Why Hiding AI Use Damages Brands More Than Exaggeration

TL;DR: A new study reveals that consumers react more negatively to companies that hide or understate their AI use (deceptive denial) than to those that exaggerate it (deceptive boasting). This asymmetry is driven by a greater sense of perceived betrayal when AI usage is concealed, highlighting the critical importance of transparency in AI communications.

As artificial intelligence (AI) continues to dominate headlines and business strategies, companies face increasing pressure to communicate their AI initiatives. However, this pressure can sometimes lead to practices known as “AI-washing,” where companies either overstate or understate their actual use of AI. A recent study delves into how these two types of AI-washing affect consumer perceptions and trust.

Inspired by the concept of “greenwashing,” AI-washing refers to deceptive corporate behaviors where organizations either exaggerate (deceptive boasting) or downplay (deceptive denial) their AI-based operations. For instance, a company might claim to use AI to “co-create” a new product without clear evidence, or a healthcare insurer might deny using AI in claims decisions while secretly employing AI-powered tools.

Two Faces of AI-Washing: Boasting vs. Denial

The research identifies two distinct types of AI-washing:

  • Deceptive Boasting: This occurs when a company dishonestly overstates or exaggerates its AI use, claiming to use AI in products or services where it does not.
  • Deceptive Denial: This is the dishonest practice of understating or not disclosing AI use when companies are, in fact, using AI in their products or services.

To understand consumer reactions, researchers conducted an experiment with 401 participants, simulating scenarios involving a fictitious health insurer, PWXL. Participants were presented with situations where the company either claimed to use AI or not, and later learned the actual truth about AI usage.

The Asymmetric Impact on Consumer Judgment

The study’s findings reveal a significant asymmetry in how consumers react to these two forms of AI-washing. The results indicate that deceptive denial evokes much more negative moral judgments from consumers compared to honest negation (when a company truthfully states it doesn’t use AI). In contrast, deceptive boasting had no significant negative effect on consumer attitudes or purchase intentions when compared to honest affirmation (when a company truthfully states it uses AI).

This means consumers are far more critical of companies that hide or downplay their AI use than those that exaggerate it. Why this difference? The research points to “perceived betrayal” as the mediating factor. Consumers feel a greater sense of betrayal when companies engage in deceptive denial.

One possible explanation for this asymmetry is the current market environment. With the widespread hype around AI, consumers may have come to expect a certain level of exaggeration from brands. However, when a company actively conceals its AI use, it might be perceived as having more nefarious intentions, such as covering up problematic AI applications (e.g., AI used to deny insurance claims), leading to a stronger sense of distrust and betrayal.

Implications for Businesses and Policymakers

These findings carry important implications for various stakeholders:

  • For Businesses: Companies should be cautious about hiding or understating their AI use. While overstating AI might not significantly harm consumer trust, concealing it can lead to severe negative moral judgments, unfavorable attitudes, and reduced purchase intentions. Prioritizing transparency, even when AI use might be unpopular, is crucial to maintaining consumer trust.
  • For Policymakers and Regulators: The study suggests that current regulations, like those from the Federal Trade Commission (FTC), which primarily focus on misleading claims and hype, should expand their definition of AI-washing to include denials and understatements of AI use. Regulators might consider requiring firms to disclose AI use, especially in sensitive sectors like insurance, finance, banking, healthcare, and employment, where AI harm can be significant.
  • For Researchers: The study introduces a broader understanding of AI-washing into business ethics literature, highlighting the need for ethical frameworks that incorporate expanded definitions of transparency and consumer agency in the context of AI.

While this study focused on the healthcare insurance sector, its robustness checks suggest the findings may apply to other industries. Future research could explore these dynamics across different product categories and cultural contexts, further refining our understanding of this evolving ethical challenge. For more details, you can read the full research paper here.

Rhea Bhattacharya
Rhea Bhattacharya is an AI correspondent with a keen eye for cultural, social, and ethical trends in Generative AI. With a background in sociology and digital ethics, she delivers high-context stories that explore the intersection of AI with everyday lives, governance, and global equity. Her news coverage is analytical, human-centric, and always ahead of the curve. You can reach her at: [email protected]
