TL;DR: Meredith Whittaker, President of Signal, has issued a stark warning about the inherent security and privacy risks posed by agentic AI systems. Speaking at the UN’s AI for Good summit, Whittaker argued that the pervasive ‘root’-level access these AI agents require to act on behalf of users creates significant vulnerabilities, potentially undermining the robust privacy and security guarantees of applications like Signal.
Geneva, Switzerland – Meredith Whittaker, the President of the secure messaging application Signal, delivered a critical warning on July 8, 2025, concerning the profound security and privacy implications of agentic artificial intelligence (AI) systems. Her remarks were made during the ‘Delegated decisions, amplified risk’ session at the United Nations’ AI for Good summit.
Whittaker emphasized that the fundamental design of agentic AI, which aims to autonomously perform tasks for users, necessitates an unprecedented level of access to personal data and system functionalities. She illustrated this by explaining, ‘To make a booking for a restaurant, it needs to have access to your browser to search for the restaurant, and it needs to have access to your contact list and your messages.’ This extensive access, she argued, creates a ‘serious attack vector’ for potential security vulnerabilities.
According to Whittaker, the industry’s current trajectory, involving billions of dollars invested in developing these ‘powerful intermediaries,’ overlooks a critical security paradox. Unlike applications such as Signal, which are meticulously designed to operate without ‘root’ access to a user’s entire system to mitigate cybersecurity risks, agentic AI demands such pervasive control. Whittaker stated, ‘The Signal messenger app you’re using is built for iOS or Android, or a desktop operating system, but in none of those environments will it have root access to the entire system. It can’t access data in your calendar. It can’t access other things.’ She cautioned that ‘Having access to Signal would ultimately undermine our ability at the application layer to provide robust privacy and security.’
She further elaborated on the technical risks, noting that for AI agents to operate autonomously, they often require ‘root permission’ to drive processes across an entire system, accessing various databases, potentially ‘in the clear’ without encryption. This ‘threatens to break the blood-brain barrier between the application layer and the OS layer by conjoining all of these separate services and muddying their data.’
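The contrast Whittaker draws between sandboxed apps and root-level agents can be sketched in a few lines of code. The following is a purely illustrative toy model (the `SandboxedApp` and `RootAgent` classes are invented for this sketch and do not reflect Signal’s or any OS’s actual implementation): a sandboxed app can only read resources its granted scopes allow, while a root-level agent bypasses that boundary entirely.

```python
from dataclasses import dataclass

# Illustrative toy model only -- not how any real OS enforces sandboxing.
# A sandboxed app is granted an explicit set of scopes; reads outside
# that set are denied at the "OS" layer.

@dataclass(frozen=True)
class SandboxedApp:
    name: str
    granted_scopes: frozenset

    def read(self, resource: str) -> str:
        if resource not in self.granted_scopes:
            raise PermissionError(f"{self.name} may not read {resource}")
        return f"<{resource} data>"

# A hypothetical root-level agent has no scope check at all: it can read
# every resource, which is the "serious attack vector" Whittaker describes,
# since compromising the agent compromises everything it can reach.
class RootAgent:
    def read(self, resource: str) -> str:
        return f"<{resource} data>"  # no boundary to stop it

signal = SandboxedApp("signal", frozenset({"messages"}))
agent = RootAgent()

print(signal.read("messages"))   # allowed: within granted scope
try:
    signal.read("calendar")      # denied: outside the sandbox
except PermissionError as e:
    print(e)
print(agent.read("calendar"))    # root agent crosses the app/OS boundary
```

The design point is that the security property lives in the enforcement layer between app and OS; an agent granted root permission dissolves that layer, which is what Whittaker means by breaking the “blood-brain barrier.”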
While Whittaker’s primary focus was on the inherent risks, other speakers at the summit, such as Heinen, suggested potential mitigations. One proposed approach is to deploy small language models trained on subsets of enterprise data, aligned with the data access policies of different user categories, so that a model cannot inadvertently leak information it was never permitted to see.
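The mitigation described above can be sketched as a simple policy-scoped training-corpus selector. This is an assumed design, not a description of any specific product; `POLICY_DATA` and `corpus_for` are hypothetical names introduced for illustration.

```python
# Illustrative sketch of per-role data scoping for small language models.
# The mapping and function names are hypothetical. The idea: each user
# category's model is trained only on the documents its access policy
# permits, so the model cannot leak data it never saw during training.

POLICY_DATA = {
    "finance": ["q3_forecast.xlsx", "payroll.csv"],
    "engineering": ["design_doc.md", "incident_log.txt"],
}

def corpus_for(role: str) -> list[str]:
    """Return only the documents a role's policy allows into training."""
    return POLICY_DATA.get(role, [])

# An engineering-scoped model never sees payroll data, so it cannot
# reproduce it, regardless of how it is prompted later.
assert "payroll.csv" not in corpus_for("engineering")
```

The key property is that the access-control decision is made once, at training time, rather than being re-litigated per query by a single all-access agent.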
Whittaker’s warning underscores a growing concern among privacy advocates about the trade-offs involved in the convenience offered by agentic AI versus the potential erosion of personal data security and privacy.


