TLDR: Radware, a leading cybersecurity firm, has issued a stark warning regarding the escalating cybersecurity risks posed by AI agents and generative AI in 2025. The company’s experts highlight a surge in sophisticated AI-driven attacks, the emergence of ‘cyber lawfare’ due to AI-generated content, and a critical lack of visibility and governance over AI tools within organizations. A recent survey indicates that 63% of security leaders view employees unknowingly granting AI agents access to sensitive data as the biggest internal threat, with 50% already experiencing AI-related cyber incidents.
Radware, a prominent cybersecurity solutions provider, has sounded the alarm on a profound and rapidly evolving threat landscape, shaped significantly by the proliferation of Artificial Intelligence (AI) agents and generative AI technologies. According to insights from Radware’s 2025 Cyber Threat Report and its experts’ predictions, the coming year will see an unprecedented surge in sophisticated, AI-powered cyberattacks, alongside the emergence of complex ‘cyber lawfare’ challenges.
AI: A Double-Edged Sword in Cybersecurity
Radware’s CISO, Howard Taylor, emphasizes that while AI has enhanced incident detection and response capabilities for defenders, it has simultaneously ‘opened new doors for cybercriminals.’ In 2025, generative AI tools are projected to be ‘weaponized by bad actors to not only create more realistic and convincing deepfakes, phishing scams, and influence campaigns, but also to launch these attacks more easily and at a larger scale.’ This rapid adoption of AI by malicious actors is also expected to ‘fuel compliance risks that, without proper oversight, can expose companies to legal and financial troubles.’
Travis Volk, Radware’s SVP of Global Service Providers, further warns that Internet Service Providers (ISPs) will face an ‘unprecedented rise in AI-driven attacks that are both more sophisticated and harder to detect.’ He notes that ‘AI is driving down the cost of attacks and escalating their speed and impact.’ Alarmingly, a hacker can acquire an ‘AI key on the dark web for as little as $15’ and execute a large language model (LLM)-assisted attack within minutes. These AI-enhanced attacks are designed to ‘simulate legitimate traffic, making them difficult to distinguish from normal user behavior, even in Zero Trust environments.’
The Rise of ‘Cyber Lawfare’ and Data Risks
Beyond direct attacks, Radware predicts that AI will ‘complicate the legal landscape, sparking “cyber lawfare” and the increasing business threat of fines, lawsuits, and potential imprisonment.’ This is attributed to AI applications potentially ‘unknowingly pull[ing] copyrighted material into AI-generated text.’ Organizations are urged to consider this ‘growing regulatory risk as an additional cost of doing business.’
Adding to these concerns, AI systems themselves are becoming high-value targets. Attackers are ‘manipulating training data to poison AI models, leading to biased outputs and unreliable decision-making,’ with these compromised models even being offered for download in popular open repositories.
Alarming Survey Data Reveals Organizational Vulnerabilities
A recent survey of 200 North American security leaders, conducted by OpinionRoute on behalf of 1Password, underscores the severity of these warnings. The survey, published on August 30, 2025, reveals that nearly two-thirds (63%) of security leaders now perceive their employees unknowingly giving AI agents access to sensitive data as the ‘biggest internal security threat.’ Half (50%) of the surveyed organizations admit to having experienced a confirmed or suspected cyber incident caused by AI or AI agents in the last six months.
Visibility and governance over AI tools remain critically low: only 21% of security leaders report full visibility into all AI tools in use, and nearly one-third (32%) believe up to half of their employees are using unauthorized AI tools. A mere 2.5% of organizations say they have full visibility into their AI applications and the data those applications can access. Furthermore, 54% describe their enforcement of AI governance policies as weak, and 56% estimate a significant gap (26% to 50%) between governed and unmanaged AI agents.
Dave Lewis, Global Advisory CISO for 1Password, warns that it is ‘only a matter of time before there is a cataclysmic incident involving AI tools, applications and services.’ He highlights that end-users are ‘routinely pasting sensitive data into chat interfaces without reading the fine print,’ leading to this data being used to train AI models and potentially reappearing in unpredictable ways. Lewis also points out that cybercriminals are ‘targeting what will soon be millions of autonomous AI agents that, if compromised, will provide them with an ability to compromise an entire process.’
Other Escalating Threats
The Radware report also details other significant threats, including a ‘staggering 550% increase in web-based DDoS attacks year-over-year’ in 2024, with some attacks peaking at over 16 million requests per second and lasting nearly 10 hours. The ‘commodification of cybercrime,’ through DDoS-for-hire services, is a major driver. API attacks also surged by 41% in 2024, with ‘shadow and zombie APIs’ creating critical blind spots for security teams.
Call to Action for CISOs
In response to this volatile threat landscape, Radware urges CISOs to prioritize proactive defense strategies. Key recommendations include adopting ‘AI-driven detection and real-time response strategies’ for DDoS defense, enhancing ‘API Security & Visibility’ to monitor all API traffic, strengthening ‘AI-Aware Cybersecurity’ against AI-generated threats, and integrating ‘Threat Intelligence’ for real-time insights. The message is clear: ‘the future of cybersecurity isn’t just about keeping up—it’s about staying ahead.’


