
Public Skepticism Hinders AI Growth, Report Reveals

TLDR: A recent report by the Tony Blair Institute for Global Change (TBI) and Ipsos indicates that a significant deficit in public trust is the leading impediment to the widespread adoption and growth of artificial intelligence. The study reveals a stark divide in perception, with non-users and older demographics expressing greater apprehension about AI’s societal risks. Experts suggest that governments and developers must prioritize demonstrating tangible benefits, ensuring transparency, and providing accessible training to cultivate ‘justified trust’ and overcome this critical hurdle for AI’s future.

A new report, a collaborative effort by the Tony Blair Institute for Global Change (TBI) and Ipsos, has identified a critical ‘public trust deficit’ as the primary obstacle impeding the growth and widespread adoption of artificial intelligence. Published on September 22, 2025, the findings underscore a significant public skepticism that is creating a major challenge for governments and industries keen on leveraging AI’s potential.

The report highlights that a lack of trust is the single biggest reason individuals are hesitant to engage with generative AI technologies. While over half of the population has experimented with generative AI tools in the past year, indicating rapid initial adoption, nearly half has never used AI at all, either at home or at work. This creates a substantial divide in public sentiment towards AI.

Crucially, the data suggests a direct correlation between AI usage and trust: the more an individual uses AI, the more they tend to trust it. For those who have never used AI, a significant 56 percent perceive it as a risk to society. This figure dramatically drops to 26 percent among weekly AI users, illustrating that familiarity can breed comfort and help counter fears of job displacement or other negative outcomes. Demographic analysis further reveals that younger generations generally exhibit more optimism towards AI, while older generations tend to be more cautious.

Globally, trust in AI is highly fragmented. The 2025 Edelman Trust Barometer indicates that while 72% of people in China express trust in AI, this figure plummets to 32% in the U.S. Similar disparities are observed across demographics, with older adults, lower-income individuals, and women generally showing less trust in AI. Concerns extend to job security, with 59% of global employees fearing displacement due to automation, and 63% worrying about information warfare waged by foreign entities.

In the UK, a December 2024 Forrester survey revealed that only 25% of citizens trust the government with personal data, compared to 35% who trust private companies like Apple. This ‘trust deficit’ is particularly alarming as AI applications begin to influence public services. Failing to prioritize trust risks public backlash, limited acceptance of AI-driven policies, and could ultimately derail ambitious national AI agendas.

To bridge this trust gap, the TBI report and other experts propose several key strategies:

1. Reframing the Narrative: Governments must shift their communication about AI from abstract promises of GDP growth to tangible, real-world benefits for citizens, such as faster hospital appointments, more efficient public services, or reduced commute times. The emphasis should be on ‘showing, not just telling.’

2. Empowering Regulators: Regulatory bodies need enhanced power and expertise to effectively oversee AI development and deployment, ensuring ethical guidelines and accountability.

3. Accessible Training and Education: Providing the public with access to training and resources will enable them to use new AI tools safely and effectively, fostering confidence and understanding.

4. Transparency and Accountability: Building public trust in AI is fundamentally about building trust in the institutions and individuals responsible for its development and governance. This requires transparency in how AI decisions are made and clear ethical frameworks.

Ultimately, the future of AI’s integration into society hinges not just on technological advancements, but on the industry’s and governments’ ability to earn and maintain public confidence through responsible innovation and clear communication of its benefits and limitations.

Rhea Bhattacharya
https://blogs.edgentiq.com
Rhea Bhattacharya is an AI correspondent with a keen eye for cultural, social, and ethical trends in Generative AI. With a background in sociology and digital ethics, she delivers high-context stories that explore the intersection of AI with everyday lives, governance, and global equity. Her news coverage is analytical, human-centric, and always ahead of the curve. You can reach her at: [email protected]
