TLDR: Meta is facing significant scrutiny over its AI assistant’s default setting, which allows user prompts, potentially containing sensitive personal information, to be publicly shared on its ‘Discover’ feed. This practice, coupled with allegations of exploiting an Android vulnerability for data collection, has raised alarms among privacy advocates and could lead to regulatory action.
Meta Platforms is currently under intense scrutiny regarding its artificial intelligence (AI) assistant’s data handling and disclosure practices, particularly concerning user prompts. Reports from PYMNTS.com indicate that Meta’s AI assistant may be publicly sharing user-submitted prompts, some of which have contained highly sensitive and private information.
The core of the controversy is a feature launched earlier this year: Meta’s AI app displays a pop-up warning that content entered by users, including personal or sensitive details, could be publicly shared. These prompts, which have reportedly included private data such as legal documents, personal identifiers, and even audio of minors, are published in the AI’s ‘Discover’ feed. A key point of contention is that public sharing is enabled by default, requiring users to opt out manually; privacy advocates argue that no other major chatbot service proactively republishes private inputs in this way.
This development comes amidst broader consumer anxieties regarding generative AI and data privacy. A PYMNTS Intelligence report, ‘Generation AI: Why Gen Z Bets Big and Boomers Hold Back,’ revealed that 36% of generative AI users are apprehensive about these platforms sharing or misusing their personal information. Similarly, 33% of non-users cite these same hesitations as a barrier to adopting the technology.
Adding to Meta’s privacy challenges, the company also faces allegations of exploiting an Android weakness, dubbed ‘Local Mess,’ to harvest web browsing data. The technique reportedly abused the operating system’s localhost (loopback) address: native apps listened on local ports, and tracking scripts running in the browser sent identifiers to those ports, purportedly allowing Meta and Russian tech company Yandex to monitor users and correlate their behavior across apps and websites, even when browsing in incognito mode or using other privacy protections. The collected data could potentially be linked to a user’s Meta account or Android Advertising ID. Meta has since stated it has halted sending data to localhost, characterizing the issue as a ‘miscommunication with Google’s policy framework.’
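To illustrate the general loopback technique described in the reporting (this is a minimal sketch, not Meta’s actual code; the endpoint path and identifier value are hypothetical), a native app can open a listener on 127.0.0.1 that any browser script on the same device can reach, letting a web-side identifier flow to the app despite Android’s app sandboxing:

```python
# Illustrative sketch of localhost-based tracking: a "native app" listens on
# the loopback interface, and a "browser script" posts an identifier to it.
# All names and values here are hypothetical examples.
import http.server
import threading
import urllib.request

received = []  # identifiers the "app" has collected

class LoopbackHandler(http.server.BaseHTTPRequestHandler):
    def do_POST(self):
        # App side: read the identifier posted from the browser context.
        length = int(self.headers.get("Content-Length", 0))
        received.append(self.rfile.read(length).decode())
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):
        pass  # silence per-request logging

# Bind to loopback on an ephemeral port; both "sides" run on one device.
server = http.server.HTTPServer(("127.0.0.1", 0), LoopbackHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Browser side: a tracking script could POST a cookie or advertising ID
# to the locally listening app, linking web activity to the app identity.
req = urllib.request.Request(
    f"http://127.0.0.1:{port}/track",
    data=b"browser_cookie=example.1234567890",
    method="POST",
)
urllib.request.urlopen(req).read()
server.shutdown()
```

Because loopback traffic never leaves the device and is not subject to browser cookie isolation or incognito protections, this channel can bridge contexts that users reasonably expect to be separate.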
Privacy watchdogs and experts suggest that both the AI prompt disclosure issue and the alleged ‘Local Mess’ exploitation could trigger significant regulatory action, particularly in the European Union. Meta is already embroiled in an $8 billion lawsuit concerning alleged data misuse, highlighting a pattern of ongoing legal and ethical challenges related to its privacy practices.