TLDR: California has become the first U.S. state to enact a comprehensive law, Senate Bill 243 (SB 243), regulating AI chatbots to safeguard minors. Signed by Governor Gavin Newsom on October 14, 2025, the legislation mandates age verification, clear AI disclosure, mandatory break reminders, and strict content moderation to prevent harmful interactions, including those involving self-harm and sexually explicit material. The law, effective January 1, 2026, holds major AI companies accountable for their chatbots' behavior.
SACRAMENTO, California – In a landmark move, California Governor Gavin Newsom signed Senate Bill 243 (SB 243) into law on October 14, 2025, establishing the nation’s first comprehensive regulations for artificial intelligence (AI) chatbots aimed at protecting minors. This pioneering legislation positions California at the forefront of responsible AI governance, setting a potential framework for regulators worldwide.
The new law, introduced by state senators Steve Padilla and Josh Becker, mandates stringent safety protocols for major AI chatbot operators, including industry giants like OpenAI, Meta, Google, Character.AI, Anthropic PBC, and Replika. These companies will now be legally accountable for their chatbots’ interactions with young users.
Key provisions of SB 243, set to take effect on January 1, 2026, include:
- Mandatory Disclosure: Chatbot companies must clearly notify minors that they are interacting with an AI, not a human, and that all responses are AI-generated.
- Age Verification: Companies must establish robust age-verification systems to ensure compliance with age-appropriate content and interaction guidelines.
- Content Moderation: Chatbots must be designed to prevent the generation of harmful content related to suicide, self-harm, or sexually explicit material, and to block minors from viewing sexually explicit images.
- Crisis Protocols: Companies must implement protocols for addressing self-harm or suicidal ideation, share those protocols with the California Department of Health, and refer at-risk users to suicide hotlines or similar crisis services.
- Mandatory Breaks: To combat excessive screen time and unhealthy attachments, chatbots must remind young users every three hours to take a break from the conversation.
- Professional Disclaimers: Chatbots must disclose that they are not healthcare providers.
- Deepfake Protections: The bill allows victims of deepfake pornography to seek civil damages of up to $250,000.
Governor Newsom emphasized the dual nature of emerging technologies. “Emerging technology like chatbots and social media can inspire, educate, and connect – but without real guardrails, technology can also exploit, mislead, and endanger our kids,” Newsom stated, underscoring the state’s commitment to balancing innovation with user safety. He added, “We’ve seen some truly horrific and tragic examples of young people harmed by unregulated tech, and we won’t stand by while companies continue without necessary limits and accountability.”
The impetus for SB 243 stems from a growing number of reports and court cases involving minors harmed by AI chatbots. Reported incidents include sexually explicit conversations, problematic 'therapy' sessions, and cases linked to teen suicides, such as the death of Adam Raine following suicidal conversations with OpenAI's ChatGPT, and a lawsuit against Character.AI over a 13-year-old's suicide after troubling interactions with its chatbot.
California’s move follows a broader trend of increased scrutiny on AI. Last month, Governor Newsom also signed SB 53, another landmark AI safety bill focusing on transparency requirements for large AI companies and whistleblower protections. While California is the first state to implement such comprehensive regulations, other states like Illinois, Nevada, Utah, and New York have passed more limited laws addressing the use of AI chatbots in mental health contexts. The new law is expected to spur similar legislative efforts across the nation and globally, as regulators grapple with the rapid evolution of AI.