TLDR: Recent reports highlight that generative artificial intelligence is significantly contributing to an increase in phishing attacks, making them more sophisticated and harder to detect. While one report points to a 3.3% rise, other analyses suggest a much more dramatic surge in AI-driven phishing volumes and success rates, posing a growing threat to individuals and organizations.
Cybercriminals are increasingly leveraging generative artificial intelligence (AI), driving a notable surge in phishing attacks, according to cybersecurity reports from late 2024 and 2025. One recent report attributes a 3.3% rise in phishing incidents in part to generative AI, underscoring how quickly the threat landscape is evolving.
However, other analyses paint an even more alarming picture. SentinelOne, for instance, reported a staggering 1,265% increase in phishing attacks driven by generative AI over the past year, a trend observed shortly after large language models (LLMs) like ChatGPT became publicly accessible. This dramatic escalation is transforming phishing from a volume-based endeavor into one of precision and personalization, making attacks significantly more effective.
AI-written phishing emails are proving at least as effective as, and sometimes more effective than, those crafted by humans. Research cited in industry sources, including work out of Harvard, found that approximately 60% of recipients fall for AI-generated phishing emails, a success rate comparable to traditional attacks. Another 2025 paper reported that AI-based phishing scams achieve a click-through rate of around 54%, far outperforming manually produced phishing content, which typically sees about 12%.
The weaponization of generative AI allows threat actors to craft highly convincing and personalized messages at unprecedented speed and scale. These AI tools can aggregate information about targets from public sources like LinkedIn and company websites, building detailed ‘data dossiers’ to create tailored lures with perfect grammar and style. The U.S. FBI has officially warned that criminals are ‘leveraging AI to orchestrate highly targeted phishing campaigns,’ producing messages customized to individual recipients.
Phishing remains the leading vector for cyber breaches, initiating the majority of security incidents across industries. In 2025 it continues to rank as the number one cyber threat, with an estimated 3.4 billion malicious emails flooding inboxes daily. The average cost of a phishing attack to a corporation has risen to $4.91 million, factoring in financial loss, downtime, and reputational damage. The human element remains a critical vulnerability, contributing to 60% of all breaches, according to Verizon's 2025 Data Breach Investigations Report (DBIR).
Beyond email, AI is also transforming other forms of social engineering. Deepfake voice phishing, for example, has driven over $20 million in losses, particularly targeting crypto executives. Deepfake incidents in Q1 2025 alone exceeded the total recorded for all of 2024 by 19%. Cybersecurity leaders are taking note: 63% express concern about AI's role in deepfake scams, and studies have found that 99.9% of people cannot reliably identify a deepfake.
Experts warn that simply improving existing security filters will not suffice, as attackers are finding new ways to exploit AI tools to bypass legacy defenses. Many business and cyber leaders expect AI to benefit cybercriminals more than cybersecurity teams, with 56% believing generative AI will hand criminals the advantage. This necessitates a rethinking of security approaches, moving towards AI-native email security platforms that combine LLM-powered detection, deep behavior analysis, and automated threat simulations to counter these advanced threats.
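As a concrete illustration of what that layered, AI-native approach can look like on the defensive side, here is a minimal Python sketch of one common pattern: cheap behavioral heuristics screen every message, and only flagged mail is escalated to an LLM classifier. The heuristics, prompt, and model name (gpt-4o-mini via the OpenAI client) are illustrative assumptions, not any specific vendor's pipeline.

```python
# Minimal sketch: heuristic pre-filter gating an LLM verdict for email triage.
# Assumptions: openai>=1.0 installed, OPENAI_API_KEY set in the environment,
# and an illustrative model name; none of this mirrors a real vendor pipeline.
import re
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

URGENCY_WORDS = {"urgent", "immediately", "verify", "suspended", "invoice"}

def heuristic_flags(subject: str, body: str) -> list[str]:
    """Cheap behavioral signals, checked before spending an LLM call."""
    flags = []
    text = f"{subject} {body}".lower()
    if any(word in text for word in URGENCY_WORDS):
        flags.append("urgency-language")
    if re.search(r"https?://\d{1,3}(\.\d{1,3}){3}", text):
        flags.append("raw-ip-link")  # links to bare IP addresses
    if re.search(r"https?://\S*(login|verify|account)\S*", text):
        flags.append("credential-bait-link")
    return flags

def llm_verdict(subject: str, body: str) -> str:
    """Ask the model for a one-word phishing/benign judgment."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; substitute your own
        temperature=0,
        messages=[
            {"role": "system",
             "content": ("You are an email security analyst. Reply with "
                         "exactly one word: PHISHING or BENIGN.")},
            {"role": "user", "content": f"Subject: {subject}\n\n{body}"},
        ],
    )
    return resp.choices[0].message.content.strip().upper()

def triage(subject: str, body: str) -> str:
    """Deliver clean mail cheaply; escalate flagged mail to the LLM."""
    if not heuristic_flags(subject, body):
        return "deliver"  # no cheap signals tripped, skip the LLM call
    return "quarantine" if llm_verdict(subject, body) == "PHISHING" else "deliver"

if __name__ == "__main__":
    print(triage("Urgent: verify your account",
                 "Click https://198.51.100.7/login to restore access."))
```

Production platforms layer far more on top, such as sender reputation, URL detonation, and analyst feedback loops, but the gating pattern is the point: inexpensive behavioral signals decide when an expensive LLM call is worth making.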