
Parental Guidance Urged as OpenAI’s Sora AI Video App Blurs Reality for Children

TLDR: Parents are facing new challenges with the widespread availability of OpenAI’s Sora 2, a generative AI video application that creates highly realistic videos from text prompts. Experts warn of significant risks to children, including exposure to harmful content, the creation of convincing deepfakes using a ‘Cameo’ feature, and a severe lack of parental controls, making it difficult for kids to discern reality from AI-generated fabrications. Organizations like Common Sense Media have rated Sora as ‘unacceptable’ due to these concerns.

The digital landscape for children is rapidly evolving, presenting new complexities for parents with the emergence of advanced generative AI video applications like OpenAI’s Sora. Initially released to limited users, Sora 2 became more widely available in September 2025, allowing anyone to create remarkably realistic videos from simple text descriptions, fundamentally blurring the lines between authentic and fabricated content.

Sora, developed by the creators of ChatGPT, functions as a text-to-video generator. Users can input a description, such as ‘a girl walking her dog down a street at sunset,’ and the app will produce a corresponding video in moments. A particularly concerning feature in Sora 2, known as ‘Cameo,’ enables users to upload their own face and voice, facilitating the creation of highly convincing deepfakes. While some AI-generated videos may still exhibit minor glitches, their overall realism is often sufficient to deceive an untrained eye, especially that of a young child.

Experts and digital wellness advocates are sounding alarms over the potential dangers this technology poses to children and teens, who are still developing critical thinking skills and a firm grasp of reality. Robbie Torney, senior director of AI Programs at Common Sense Media, describes Sora as ‘like ChatGPT, but for video instead of text,’ noting that ‘users can create their own videos and scroll through a feed of AI-generated content similar to TikTok, except nothing is real.’ Common Sense Media has rated Sora as ‘unacceptable’ due to its inherent risks.

Key concerns highlighted by experts include a severe lack of content oversight, which allows harmful material—such as content promoting eating disorders, self-harm references, dangerous activities, and stereotypes—to easily bypass safety filters. Torney explicitly states that Sora ‘easily generates eating disorder content, self-harm references, and dangerous activities that ChatGPT blocks.’ Furthermore, the app provides minimal crisis resources or warnings for such content.

Parental controls within Sora are notably weak, limited to toggles for the personalized feed, continuous scrolling, and direct messages. This means parents have virtually no way to monitor what their children are viewing, sharing, or creating, and no alerts for concerning behavior. Yaron Litwin, a social media expert, emphasizes that Sora’s videos are ‘highly realistic and often difficult to distinguish from actual footage,’ exacerbating the risk of confusion and emotional harm.

The ‘Cameo’ feature, which allows a child’s likeness to be duplicated and misused in video form, raises significant deepfake concerns. Such technology has the potential to fuel conflicts among peers, spread rumors, damage reputations, and create profound confusion about what is real. Lucas Hansen, founder of CivAI, warns of a ‘liar’s dividend,’ where increasingly high-caliber AI videos could lead people to dismiss authentic content as fake, stating, ‘There is almost no digital content that can be used to prove that anything in particular happened.’ Reports indicate that Sora 2 may even lack the watermarks present in earlier versions, making AI-generated content even harder to detect.

Beyond individual harm, the technology presents broader societal risks, including the easy generation of disinformation, propaganda, and sham evidence that could lend credence to conspiracy theories or implicate innocent individuals. Jeannie Paterson, Co-Director of the Centre for AI and Digital Ethics, points out that intellectual property issues surrounding generative AI remain unclear: AI-generated videos featuring an individual’s face can potentially be created without permission, leaving individuals to rely solely on defamation laws for protection.


In response to these growing concerns, experts advise parents to take proactive measures. Recommendations include keeping children off Sora, engaging in open conversations about online safety and responsible digital choices, and fostering healthy skepticism. Parents are encouraged to teach their children to be ‘investigators’—to question the source of videos, look for inconsistencies, and understand why certain content is circulating. Additionally, delaying and restricting access to such tools, using parental controls to block app downloads, and supervising any experimentation with these apps are strongly recommended, as these tools are generally deemed inappropriate for unsupervised children and young teens.

Ananya Rao (https://blogs.edgentiq.com)
Ananya Rao is a tech journalist with a passion for dissecting the fast-moving world of Generative AI. With a background in computer science and a sharp editorial eye, she connects the dots between policy, innovation, and business. Ananya excels in real-time reporting and specializes in uncovering how startups and enterprises in India are navigating the GenAI boom. She brings urgency and clarity to every breaking news piece she writes. You can reach her at: [email protected]
