TLDR: South Korea is experiencing a dramatic increase in voice phishing scams, significantly amplified by generative AI technology. Financial damages reached ₩642.1 billion ($470 million) in the first half of 2025, with authorities warning that annual losses could exceed ₩1 trillion ($730 million) for the first time. Generative AI enables scammers to create highly realistic voice and video clones, making deception nearly undetectable, even to close family members.
Voice phishing scams in South Korea have escalated dramatically, with generative artificial intelligence (AI) identified as a primary catalyst for their increased sophistication and reach. Data compiled on July 15 indicates that financial damages from these scams reached ₩642.1 billion (approximately $470 million) in the first half of 2025 alone. This figure follows a record ₩854.5 billion ($615 million) in total losses for the entirety of 2024. Authorities are now issuing stark warnings, projecting that annual damages could surpass ₩1 trillion ($730 million) for the first time ever.
The average loss per victim has also jumped sharply, crossing the ₩40 million ($29,000) mark in 2024, a 73 percent increase over the previous year. Once less prevalent than romance scams or investment schemes, voice phishing is now expanding rapidly, largely due to advances in AI.
Generative AI, with its capability to produce eerily lifelike voice and video content, has profoundly amplified the realism and effectiveness of these criminal schemes. Police officials report that the technology has evolved to a point where it can replicate not only a person’s voice and face but also their unique tone, speech patterns, and intonation. This advanced mimicry makes it ‘nearly impossible for even close family members to detect the deception.’ Scammers can now generate deepfake videos and voice clones with alarming ease, often requiring just a handful of social media images and a 30-second voice clip.
A notable case uncovered in Busan in May 2024 illustrates the severity of the threat. A woman in her 60s, identified as ‘B,’ received a phone call seemingly from her daughter. The caller claimed to be detained after co-signing a loan for an unreachable friend. Alarmed, B withdrew ₩20 million from her bank. Police later confirmed that the voice on the call was an AI-generated mimicry of her daughter’s voice.
Another alarming incident involved a deepfake video. In a separate case, a victim received a video call appearing to be from their daughter, who claimed to be in distress. In reality, the daughter was traveling safely. An investigation by Jeju Seobu Police Station revealed that the video was generated using deepfake technology, leveraging photos and updates the daughter had posted online. The video was so convincing that detectives initially suspected the daughter might have staged the footage. This marked the first confirmed instance in South Korea of deepfake video technology being used in a voice phishing crime.
Historically, victims of voice phishing in South Korea have predominantly been individuals in their 50s and 60s. According to the National Police Agency, this demographic accounted for 47.6 percent of the 12,339 cases reported in the first half of 2025. Police attribute this vulnerability to their relatively higher asset levels and potentially limited familiarity with advanced digital threats. With AI now enabling the manipulation of both video and voice, international criminal syndicates are increasingly targeting South Korean citizens, posing a significant challenge to law enforcement and financial institutions.