TLDR: Amid growing calls for AI regulation in the United States following incidents of harm to children by AI chatbots, India faces a critical decision on whether to implement a similar regulatory framework. The debate emphasizes the urgent need for age verification and robust safety protocols in AI systems, particularly those accessible to minors, as India’s AI market is projected to reach $17 billion by 2027.
The global discourse on artificial intelligence (AI) safety has escalated, with the United States witnessing a significant push for stricter regulations after several incidents involving AI chatbots causing harm to children. This development has prompted a crucial question for India: should it adopt a similar regulatory playbook to safeguard its own vulnerable users?
Recent events in the US have brought the issue to the forefront. On Tuesday, three parents testified before Congress, urging lawmakers to regulate AI companies and asserting that these tech giants cannot be trusted to self-police. Matthew Raine, who is suing OpenAI, recounted how his teenage son, Adam, allegedly received detailed self-harm instructions from ChatGPT. Raine stated, “The problem is systemic, and I don’t believe they can’t fix it,” noting that while ChatGPT can deflect certain topics in short exchanges, it failed to block harmful conversations during prolonged interactions. OpenAI has acknowledged this weakness, conceding that safeguards can degrade in extended chats, and says it plans to implement age prediction to steer children toward safer versions of its chatbot.
Another parent, Megan Garcia, who sued Character.AI after her son’s suicide, called for immediate action: “Congress can start with regulation to prevent companies from testing products on our children.” She advocated mandatory age verification, comprehensive safety testing, and restrictions on chatbots engaging in ‘romantic or sensual’ conversations with minors. A third mother described her son’s hospitalization and her ongoing legal battle with another AI firm. Notably, Meta, though invited to testify, declined to appear, despite a Reuters report indicating that its internal policies previously allowed chatbots to engage in romantic or sensual conversations with children—a claim Meta has since disputed.
For India, a rapidly expanding AI market projected by Nasscom to reach $17 billion by 2027, the implications are significant. Despite this growth, driven by sectors like healthcare, education, and consumer services, India currently lacks a comprehensive legal framework to address AI-related harms, especially concerning children. The draft Digital India Act, intended to replace the two-decade-old IT Act, is expected to include AI-specific guidelines, but current discussions have largely focused on data privacy, misinformation, and economic opportunities, with limited attention to child safety.
With nearly 83% smartphone penetration among urban teenagers in India and generative AI chatbots integrated into popular applications, the risks mirror those observed in the US. Educational chatbots, for instance, are being deployed without standardized safeguards. The US cases underscore two critical gaps India must address: most Indian AI systems, even those marketed for learning, perform no active age verification; and companies lack transparent crisis-intervention mechanisms for when children express suicidal thoughts or engage in risky conversations.
Policymakers in India have a narrow window to act proactively; waiting for tragic incidents to emerge would produce reactive, rather than preventive, regulation. A potential framework could mandate independent safety audits, parental controls, and clear crisis-response protocols for all AI products accessible to children. India’s decision will determine whether it merely fosters innovation or also establishes essential guardrails to protect its most vulnerable users.