spot_img
HomeNews & Current EventsAdversa AI's 2025 Report: Generative and Agentic AI Under...

Adversa AI’s 2025 Report: Generative and Agentic AI Under Escalating Cyberattack

TLDR: Adversa AI has released its ‘Top AI Security Incidents – 2025 Edition’ report, revealing a significant increase in cyberattacks targeting generative and agentic AI systems. The report highlights prompt injection as a major vulnerability and notes that agentic AI is responsible for the most severe incidents, including financial losses and data breaches.

TEL AVIV, Israel – Adversa AI, a leader in AI Red Teaming and Agentic AI Security, has unveiled its ‘Top AI Security Incidents – 2025 Edition’ report, providing a comprehensive analysis of the escalating cyber threats facing artificial intelligence systems. The report, released on July 31, 2025, serves as a critical ‘wake-up call’ to the industry, emphasizing that AI has become a primary attack surface for cybercriminals.

The findings indicate a dramatic surge in AI security incidents, with 2025 projected to surpass all previous years combined in breach volume. A key revelation is that 35% of all real-world AI security incidents were attributed to prompt injection, a seemingly simple attack vector that has led to substantial financial losses, with some incidents exceeding $100,000.

While generative AI (GenAI) was involved in 70% of the reported incidents, agentic AI systems were responsible for the most dangerous failures. These include cryptocurrency thefts, API abuses, legal liabilities, and supply chain attacks. The report underscores that these autonomous AI agents, likened to ‘an intern with admin access—and no clear supervision,’ present significant new risks as they gain real-world influence.

According to Alex Polyakov, founder and CEO of Adversa AI, ‘The most dangerous cyberweapon in 2025? Your words.’ He further notes that many engineers working on AI do not fully grasp these new attack vectors. The report details how failures are occurring across all layers of AI systems, stemming from issues such as improper validation, infrastructure gaps, and insufficient human oversight. Specific systems like Amazon Q, Microsoft Azure, OmniGPT, and ElizaOS have shown vulnerabilities across multiple layers.

The ‘Top AI Security Incidents – 2025 Edition’ report offers in-depth visualizations, including industry heatmaps and architectural breakdowns, to illustrate where AI systems are most vulnerable. It also provides 17 real-world case studies, ranging from incidents involving Amazon Q to Asana, demonstrating the evolution of attacks and the necessity for security measures that extend beyond just the model.

Also Read:

Adversa AI, which protects Fortune 500 AI innovators, financial institutions, and government agencies, emphasizes the urgency for organizations to bolster their defenses against these evolving threats. The company’s award-winning Agentic AI Security Platform is designed to provide continuous AI red teaming across GenAI applications, autonomous AI agents, and modern MCP stacks, helping organizations assess if their AI can be manipulated, leak data, or persist malicious logic.

Dev Sundaram
Dev Sundaramhttps://blogs.edgentiq.com
Dev Sundaram is an investigative tech journalist with a nose for exclusives and leaks. With stints in cybersecurity and enterprise AI reporting, Dev thrives on breaking big stories—product launches, funding rounds, regulatory shifts—and giving them context. He believes journalism should push the AI industry toward transparency and accountability, especially as Generative AI becomes mainstream. You can reach him out at: [email protected]

- Advertisement -

spot_img

Gen AI News and Updates

spot_img

- Advertisement -