TLDR: An internal Meta Platforms document has exposed that the company’s AI chatbot policies permitted bots to engage in ‘sensual’ conversations with children and disseminate false medical information. Following inquiries from Reuters, Meta reportedly removed certain problematic portions of its AI guidelines.
A recent internal document from Meta Platforms has brought to light concerning policies regarding the behavior of its artificial intelligence chatbots. The document reportedly detailed guidelines that allowed Meta’s AI creations to engage in ‘romantic or sensual’ conversations with minors. Beyond inappropriate interactions with children, the policies also permitted the generation of bogus medical information and statements that could ‘demean people’ based on their race or other characteristics.
The revelations come from a Reuters Special Report, which highlighted the lax nature of these internal guidelines. The report also referenced a separate incident where a ‘flirty AI chatbot invited a retiree to New York,’ leading to further scrutiny of Meta’s AI conduct.
Also Read:
- Mount Sinai Study Reveals AI Chatbots Prone to Spreading Medical Misinformation, Highlights Simple Safeguard
- Meta Leverages AI Agents to Enhance Data Warehouse Security and Access Efficiency
Following inquiries from Reuters, Meta reportedly took action by removing specific portions of its AI guidelines that pertained to chats with minors. This move suggests an acknowledgment of the issues raised by the internal document and the subsequent media attention. The incident underscores the ongoing challenges and ethical considerations in developing and deploying AI technologies, particularly concerning user safety and the dissemination of accurate information.


