TLDR: Cybersecurity experts are sounding the alarm over a dramatic increase in AI-driven cyberattacks, with Japan’s industries and financial institutions experiencing a significant surge. Criminals are weaponizing generative AI tools to execute sophisticated phishing, deepfake fraud, and ransomware attacks at unprecedented speed and scale, lowering the barrier to entry for cybercrime. A recent survey revealed that nearly a third of Japanese firms have already fallen victim to these evolving threats, highlighting critical gaps in current security postures and governance.
The year 2025 has marked a turning point in the cybersecurity landscape, as generative artificial intelligence (AI) has become a central force dramatically escalating the scale and sophistication of cyberattacks worldwide, including a notable surge in Japan. Experts warn that while AI offers powerful defensive capabilities, its misuse by malicious actors is exposing profound vulnerabilities across industries.
Recent reports from September 2025 highlight a significant increase in AI-driven cyberattacks targeting Japan’s manufacturing and financial sectors. These incidents, which leverage generative and autonomous AI tools for phishing, credential theft, and malware deployment, have resulted in data breaches and heightened operational risk, prompting OECD.AI to classify them as ‘AI Incidents’ because the harm was caused directly by AI systems.
A May 2025 survey conducted by Teikoku Databank found that 32.0% of Japanese companies have experienced a cyberattack, with large companies particularly affected at 41.9%. Small and medium-sized enterprises also faced substantial threats, with 30.3% reporting attacks. Alarmingly, 6.7% of all surveyed companies experienced an attack within the past month, a figure that rose to 7.9% among small businesses, underscoring a rapidly growing risk for smaller firms.
Globally, the financial impact of cybercrime is projected to reach a staggering $10.5 trillion annually by 2025. Cybersecurity experts are now contending with a new threat dubbed ‘vibe hacking,’ where AI is increasingly misused to launch highly sophisticated attacks. Huzefa Motiwala, senior director at Palo Alto Networks, stated, ‘Generative AI has lowered the barrier of entry for cybercriminals. We’ve seen how easily attackers can use mainstream AI services to generate convincing phishing emails, write malicious code, or obfuscate malware.’
Simulations conducted by Palo Alto Networks’ Unit 42 team demonstrated the alarming efficiency of AI-powered attacks, showing that a full ransomware attack could be executed in just 25 minutes, roughly a hundred times faster than traditional methods. The acceleration is also evident in phishing: attacks linked to generative AI have surged by 1,265%, with some reports citing an increase as high as 4,151%, and an early 2025 report noted a 466% rise in phishing reports in a single quarter. These AI-crafted emails are proving dangerously effective, convincing 60% of recipients to engage, a success rate comparable to that of human social-engineering experts.
The dark side of AI extends to the proliferation of ‘Evil LLMs’ such as FraudGPT and WormGPT, available on dark web forums for as little as $100. These tools, stripped of ethical safeguards, can generate ‘remarkably persuasive’ Business Email Compromise (BEC) messages and other malicious content, effectively offering ‘crime as a service.’ The threat is compounded by deepfake technology: the first quarter of 2025 alone saw 179 deepfake incidents, 19% more than in all of 2024. A notable 2024 incident involved the global engineering firm Arup, from which $25.6 million was stolen through an AI-generated video call, a stark example of ‘technology-enhanced social engineering.’
AI is also revolutionizing malware development, driving the rise of polymorphic malware that can change its identifiable features as often as every 15 seconds to evade signature-based detection. In 2025, polymorphic tactics appear in an estimated 76.4% of all phishing campaigns, and over 70% of major breaches involve some form of polymorphic malware.
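To see why signature matching struggles against this tactic, consider the minimal, deliberately benign sketch below (illustrative Python only; the ‘payload’ is harmless text and the detector is a toy hash lookup, not any real product): mutating even a few bytes yields a new fingerprint, so a scanner keyed to known hashes flags the original copy but misses every mutated one.

```python
import hashlib
import os

# Benign stand-in for any binary a scanner might fingerprint.
payload = b"harmless stand-in bytes for a program body"

# A signature-based detector at its simplest: a set of known-bad hashes.
known_bad_signatures = {hashlib.sha256(payload).hexdigest()}

def mutate(body: bytes) -> bytes:
    """Return a copy with different bytes. Real polymorphic engines
    re-encrypt or restructure code; random padding is enough to
    change every byte-level signature."""
    return body + os.urandom(8)

def is_flagged(body: bytes) -> bool:
    """Flag a file only if its hash matches a known signature."""
    return hashlib.sha256(body).hexdigest() in known_bad_signatures

print(is_flagged(payload))          # True  - the original copy matches
print(is_flagged(mutate(payload)))  # False - one mutation defeats the hash match
```

Actual polymorphic engines re-encrypt or reorder their code rather than appending padding, but the effect on byte-level signatures is the same, which is why defenders increasingly pair signature matching with behavioral detection.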
Sundareshwar Krishnamurthy, partner at PwC India, emphasized, ‘AI has become a cybercrime enabler, and the Claude Code incident marks a turning point.’ He added that ‘cybercriminals are actively misusing off-the-shelf AI tools, essentially chatbots modelled on generative AI systems but stripped of safety guardrails and sold on dark web forums.’ Tarun Wig, CEO of Innefu Labs, warned that these tools ‘automate everything from crafting highly convincing phishing emails to writing polymorphic malware and orchestrating social-engineering campaigns at scale,’ enabling attackers to ‘generate deepfake audio or video, customise ransomware, and even fine-tune exploits against specific targets.’
Experts are calling for urgent action from AI developers, including OpenAI, Anthropic, Meta, and Google, to implement stronger safeguards, continuous monitoring, and rigorous red teaming. ‘Much like pharmaceuticals undergo safety trials, AI models need structured safety assessments before wide release,’ stated Wig. The data clearly indicates that the most damaging AI-related incidents stem not from unstoppable, super-powered attacks but from fundamental, preventable failures in oversight and governance.