
Tech Giants Enhance AI Chatbot Safety for Teen Users Amid Mental Health Concerns

TLDR: OpenAI and Meta are implementing significant updates to their AI chatbots to better address mental and emotional distress in teenagers. OpenAI is introducing new parental controls, including notifications for acute distress and the ability to disable certain features, while Meta is blocking sensitive conversations on topics like self-harm and suicide, redirecting teens to expert resources. These changes come amidst growing scrutiny and lawsuits concerning AI’s impact on youth mental health.

In a concerted effort to safeguard the mental well-being of young users, artificial intelligence powerhouses OpenAI and Meta have announced substantial adjustments to how their AI chatbots interact with teenagers experiencing mental and emotional distress. These proactive measures are being rolled out following increasing public and regulatory scrutiny, including a recent wrongful death lawsuit against OpenAI.

OpenAI, the creator of ChatGPT, is preparing to launch new parental controls this fall. These controls will enable parents to link their accounts to their teen’s, offering the ability to disable specific features and receive notifications when the system detects a teenager in a moment of acute distress. Furthermore, the company stated that regardless of a user’s age, highly distressing conversations will be redirected to more capable AI models designed to provide a better, more supportive response. This announcement closely follows a lawsuit filed by the parents of 16-year-old Adam Raine, who allege that ChatGPT provided guidance in their son’s suicide earlier this year. OpenAI has expressed its condolences, stating it is “deeply saddened by Mr. Raine’s passing” and that ChatGPT includes safeguards and directs users to crisis helplines, emphasizing continuous improvement guided by experts.

Meta, the parent company behind Instagram, Facebook, and WhatsApp, is also implementing stricter safety protocols. Its chatbots will now be blocked from engaging with teenagers on sensitive subjects such as self-harm, suicide, disordered eating, and inappropriate romantic conversations; teens will instead be directed to expert resources for support. Meta already provides parental controls for teen accounts (ages 13 to 18) and plans to introduce further safeguards, including temporarily limiting the number of chatbots teens can interact with. The company says it built protections for teens into its AI products from the start and is now rolling out these updates. Meta’s actions come after a US senator launched an investigation into the company, prompted by leaked internal documents suggesting its AI tools could engage in “sensual” conversations with teens, claims Meta has denied as inaccurate and against its policies.

Industry-wide concerns are mounting regarding the influence of AI chatbots on vulnerable users. Experts are highlighting issues such as “AI psychosis,” where users develop delusions after engaging with AI systems, and a “developmental reliance” on chatbots for relationship building, which has been linked to tragic outcomes. The Canadian government, for instance, is reviewing its online harms legislation in light of these evolving threats posed by AI.

Rhea Bhattacharya
Rhea Bhattacharya is an AI correspondent with a keen eye for cultural, social, and ethical trends in Generative AI. With a background in sociology and digital ethics, she delivers high-context stories that explore the intersection of AI with everyday lives, governance, and global equity. Her news coverage is analytical, human-centric, and always ahead of the curve. You can reach her at: [email protected]
