TLDR: Deepfakes, hyper-realistic AI-generated media, are rapidly evolving and India is among the top five most exposed nations due to its large online population and limited awareness. Experts highlight the malicious use of deepfakes for misinformation, fraud, political manipulation, and gender-based abuse, emphasizing the urgent need for stricter regulations, public education, and advanced detection tools to combat this growing threat.
In an increasingly digital world where ‘seeing is no longer believing,’ artificial intelligence (AI) is generating hyper-realistic videos, audio clips, and images, commonly known as deepfakes. The term, a portmanteau of ‘deep learning’ and ‘fake,’ describes synthetic media that have evolved from a remarkable showcase of AI’s creative capabilities into a menacing weapon. Jaspreet Bindra, Co-founder of AI & Beyond, noted that approximately 95% of deepfakes are used maliciously, primarily to harass women, defame public figures, or disseminate misinformation.
The Anatomy and Evolution of Deepfakes
Deepfakes are primarily produced by a class of deep-learning models called Generative Adversarial Networks (GANs). A GAN pits two neural networks against each other: a ‘generator’ that creates fake content and a ‘discriminator’ that assesses its realism. Through iterative training loops, the generator refines its output until it is nearly indistinguishable from reality. While deepfakes predate modern generative AI, newer diffusion models have significantly amplified the threat. Deep-learning algorithms can now convincingly mimic facial expressions, body language, and speech cadence, making detection increasingly difficult.
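The generator-versus-discriminator loop described above can be sketched in miniature. The toy below is purely illustrative and uses only simple one-dimensional numbers rather than images: a linear ‘generator’ learns to produce samples that imitate a target Gaussian distribution, while a logistic ‘discriminator’ tries to tell real samples from fakes. All names and parameter values here are our own assumptions for the sketch; real deepfake systems use deep convolutional networks trained on huge image datasets, but the adversarial training dynamic is the same.

```python
import math
import random

random.seed(0)

REAL_MEAN, REAL_STD = 4.0, 0.5   # the "real" data distribution to imitate
LR, BATCH, STEPS = 0.05, 64, 2000

def sigmoid(x):
    x = max(-30.0, min(30.0, x))  # clamp to avoid math.exp overflow
    return 1.0 / (1.0 + math.exp(-x))

def mean(xs):
    return sum(xs) / len(xs)

# Generator G(z) = a*z + b, discriminator D(x) = sigmoid(w*x + c):
# the simplest possible instances of the two competing networks.
a, b = 1.0, 0.0
w, c = 0.1, 0.0

for _ in range(STEPS):
    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    real = [random.gauss(REAL_MEAN, REAL_STD) for _ in range(BATCH)]
    z = [random.gauss(0.0, 1.0) for _ in range(BATCH)]
    fake = [a * zi + b for zi in z]
    d_real = [sigmoid(w * x + c) for x in real]
    d_fake = [sigmoid(w * x + c) for x in fake]
    grad_w = mean([-(1 - dr) * xr + df * xf
                   for dr, xr, df, xf in zip(d_real, real, d_fake, fake)])
    grad_c = mean([-(1 - dr) + df for dr, df in zip(d_real, d_fake)])
    w -= LR * grad_w
    c -= LR * grad_c

    # Generator step: push D(fake) toward 1 (non-saturating GAN loss),
    # i.e. learn to fool the discriminator.
    z = [random.gauss(0.0, 1.0) for _ in range(BATCH)]
    fake = [a * zi + b for zi in z]
    d_fake = [sigmoid(w * x + c) for x in fake]
    grad_a = mean([-(1 - df) * w * zi for df, zi in zip(d_fake, z)])
    grad_b = mean([-(1 - df) * w for df in d_fake])
    a -= LR * grad_a
    b -= LR * grad_b

# After training, the generator's samples cluster near the real mean.
fake_mean = mean([a * random.gauss(0.0, 1.0) + b for _ in range(10_000)])
print(f"generator sample mean ~{fake_mean:.2f} (target {REAL_MEAN})")
```

The generator never sees the real data directly; it improves only through the discriminator’s feedback, which is exactly why each side’s progress forces the other to improve, and why the resulting fakes become progressively harder to detect.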
Tracing the Origins of Synthetic Media
The manipulation of video with machine learning began in the 1990s. In 1997, Christoph Bregler, Michele Covell, and Malcolm Slaney at Interval Research Corporation developed ‘Video Rewrite,’ a system that could automatically reanimate human faces in videos, syncing lip movements to new audio. The biggest breakthrough came in 2014 with Ian Goodfellow’s development of the Generative Adversarial Network (GAN). The term ‘deepfake’ gained popularity in late 2017 through a Reddit user who posted AI-edited videos, often involving non-consensual pornographic material. In 2023, AI scholar Nina Schick produced the world’s first ‘certified’ deepfake video, incorporating an industry watermark to promote transparency.
India’s High Exposure and Notable Cases
India is among the top five countries experiencing a surge in deepfake cases, particularly in misinformation and gender-based abuse. This high exposure is attributed to its massive online population and limited public awareness. Notable incidents in India include:
Rana Ayyub Incident (2018): A deepfake pornographic video was used to discredit the journalist.
Ankur Warikoo Case (2024): Deepfake videos of the entrepreneur were circulated to promote fictitious stock-market scams.
NSE Officials Scam (2025): Impersonations of National Stock Exchange representatives in deepfake videos led to financial losses for investors.
Gaurav Bhatia Case (2024): Deepfake videos of the advocate and BJP spokesperson were part of a political misinformation campaign.
Celebrity Deepfakes: Bollywood actors like Rashmika Mandanna, Katrina Kaif, and Alia Bhatt have been victims of explicit deepfake content.
Nirmaan Malhotra Case: A deepfake video of the social media influencer was circulated to push fake investment advice.
National Security Expert Amit Dubey highlighted that deepfake creation has transitioned from a high-skill activity to something achievable by anyone with basic tools and mobile apps, enabling rapid spread of misinformation. He warned of increasing use for political manipulation, identity theft, and voice-based scams, citing instances of AI voice cloning tricking people into transferring money.
Challenges and Solutions: A Battle of AI vs. AI
Experts agree that technology alone cannot solve the deepfake problem. Jaspreet Bindra emphasized the need for governments to establish strict regulations for deepfake creation and distribution, along with mandatory media literacy education in schools, similar to Finland and the Netherlands. He described deepfakes as the ‘digital equivalent of an acid attack,’ stressing that combating them requires a combined effort from governments, researchers, companies, and society.
Legally, India currently relies on existing provisions of the Information Technology Act, 2000, and the Indian Penal Code. However, cyber-law expert Sakshar Duggal pointed out the absence of dedicated laws covering synthetic-media manipulation, which makes it difficult to trace and prosecute anonymous perpetrators. The Ministry of Electronics and Information Technology (MeitY) has warned AI and social media companies to curb deepfakes, with IT Minister Ashwini Vaishnaw indicating that new regulations and penalties are on the way. Prime Minister Narendra Modi has also raised concerns, discussing the issue with OpenAI.
Protecting Yourself from Deepfakes
Experts advise several steps for protection:
Limit sharing personal content: Avoid unnecessary sharing of photos, voice notes, or videos.
Fact-check before sharing: Utilize tools like Google Reverse Image Search, InVID, Hive Moderation, or Microsoft’s Video Authenticator.
Report suspicious content: Use reporting features on social media platforms for AI, deepfake, or fabricated media.
Be vigilant: Exercise caution with sensational clips, especially those targeting public figures.
Advocate for education: Promote digital literacy in schools and universities.
Bindra advocates for digital watermarking systems for genuine content and a global digital ethics charter to balance innovation with safety. While tech giants like Meta, Google, and OpenAI have announced watermarking and labeling systems, widespread adoption remains inconsistent. As the lines between truth and fabrication blur, deepfakes represent a new form of digital warfare targeting trust itself, underscoring that the battle requires ‘smarter humans’ as much as smarter machines.


