
AI Safety Advocates Accuse Google of Breaching Transparency Pledges for Gemini 2.5 Pro

TLDR: Protesters, led by the activist group PauseAI, have accused Google of failing to uphold its public commitments regarding AI safety, specifically concerning the transparency and external evaluation of its Gemini 2.5 Pro model. The group staged a mock trial outside Google DeepMind’s London headquarters, alleging violations of pledges made at the 2024 AI Safety Summit in Seoul.

Dozens of protesters, organized by the activist group PauseAI, staged a mock courtroom trial outside Google DeepMind’s London headquarters on Monday, July 28, 2025. The demonstration accused the AI giant of breaking public safety promises made during the launch of its Gemini 2.5 Pro model. Over 60 participants chanted “Test, don’t guess” and “Stop the race, it’s unsafe” while conducting a theatrical trial complete with a judge and jury.

PauseAI claims that Google violated commitments made at the 2024 AI Safety Summit in Seoul, where the company pledged to involve external evaluators in testing its advanced AI models and publish detailed transparency reports. However, when Google released Gemini 2.5 Pro in April, it labeled the model “experimental” and initially provided no third-party evaluation details. A safety report published weeks later was criticized by experts as lacking substance and failing to identify external reviewers.

Ella Hughes, organizing director of PauseAI, addressed the crowd, stating, “Right now, AI companies are less regulated than sandwich shops. If we let Google get away with breaking their word, it sends a signal to all other labs that safety promises aren’t important.” This protest reflects growing public concern about the pace of AI development and the lack of oversight.

Joep Meindertsma, founder of PauseAI, who also runs a software company and uses AI tools from major providers, explained that the group chose to focus on this specific transparency issue as an achievable near-term goal. Monday’s demonstration marked PauseAI’s first targeted protest concerning this particular Google commitment. The group is now engaging with members of the UK Parliament to escalate its concerns through political channels.

Google has not yet responded to requests for comment regarding the protesters’ demands or its future transparency plans for its AI models. Joseph Miller, Director of PauseAI UK, elaborated on Google DeepMind’s alleged failure to adhere to the Frontier AI Safety Commitments, emphasizing the need for greater transparency and stronger regulatory powers for the UK AI Security Institute. He noted that the company has also declined to answer journalists’ questions about the institute’s involvement in testing Gemini 2.5 Pro, asserting that Google “violated the AI safety commitments.”

Rhea Bhattacharya (https://blogs.edgentiq.com)

Rhea Bhattacharya is an AI correspondent with a keen eye for cultural, social, and ethical trends in Generative AI. With a background in sociology and digital ethics, she delivers high-context stories that explore the intersection of AI with everyday lives, governance, and global equity. Her news coverage is analytical, human-centric, and always ahead of the curve. You can reach her at: [email protected]
