TLDR: A significant policy flaw has been discovered in Microsoft’s Copilot Agent governance framework, allowing any authenticated user within an organization to bypass security controls and access AI agents, including those designated as private or privileged. The vulnerability, tracked as CVE-2025-XXXX with a critical CVSS 3.1 score of 9.1, stems from a failure to enforce policies consistently across all API endpoints. Microsoft released a patch in August 2025 and urges administrators to apply the update and implement additional security measures.
Microsoft has recently disclosed a critical policy flaw within its Copilot Agent governance framework, which could allow any authenticated user to access and interact with AI agents across an organization, effectively bypassing established policy controls. This vulnerability, identified as CVE-2025-XXXX and assigned a critical severity rating of 9.1 on the CVSS 3.1 scale, poses substantial security and compliance risks for enterprises utilizing Microsoft Copilot for Microsoft 365.
The core of the issue lies in the inconsistent enforcement of Copilot Agent Policies. Administrators can define per-user or per-group policy rules to restrict the visibility and operation of specific AI agents through the Microsoft 365 admin center’s management APIs, but these controls are not uniformly applied to the broader Graph API endpoints that graphical user interface (GUI) clients and script-based integrations commonly use to discover and invoke agents.
Consequently, any user with basic access to Graph API calls—a privilege granted by default with every Microsoft 365 license—can retrieve a complete roster of all AI agents within the organization, including agents explicitly marked as ‘private’ or intended for privileged roles. More alarmingly, these unauthorized users can then invoke those agents by sending prompts to their execution endpoints, entirely circumventing the intended policy checks.
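The enforcement gap described above can be sketched in miniature: one code path checks the agent policy before listing agents, while a second path serving the same data skips the check. The agent records, group names, and function names below are illustrative assumptions for this sketch, not Microsoft schemas or actual Graph API calls.

```python
# Hypothetical tenant agent catalog; schema is an assumption for illustration.
AGENTS = {
    "exec-briefing": {"visibility": "private", "allowed_groups": {"executives"}},
    "helpdesk-bot":  {"visibility": "public",  "allowed_groups": set()},
}

def admin_center_list(user_groups):
    """Policy-aware path: visibility rules are enforced before listing."""
    return [
        name for name, agent in AGENTS.items()
        if agent["visibility"] == "public" or user_groups & agent["allowed_groups"]
    ]

def graph_list(user_groups):
    """Unenforced path, as the flaw describes: every agent in the tenant
    is returned regardless of the caller's policy grants."""
    return list(AGENTS)

# A user in no privileged group sees only public agents through the
# policy-aware path, but the unenforced path leaks the full roster.
print(admin_center_list(set()))   # ['helpdesk-bot']
print(graph_list(set()))          # ['exec-briefing', 'helpdesk-bot']
```

The fix Microsoft shipped amounts to making every endpoint behave like the first function: the policy check must sit in shared middleware, not in one client-facing surface.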
As a Microsoft engineer who discovered and reported the flaw candidly stated, ‘We thought tenant administrators had exclusive visibility into their AI agents, but the enforcement plane in Graph was wide open.’ This oversight undermines the fundamental principles of a zero-trust security posture and could expose sensitive automation workflows, such as privileged credential rotation, data classification orchestration, or executive briefing preparations, to unauthorized actors.
Microsoft responded swiftly to the discovery, verifying the proof-of-concept exploit within 24 hours. The company subsequently issued a patched version of the policy enforcement middleware in August 2025 and notified affected customers through the Microsoft 365 Message Center. Administrators are strongly advised to apply the August 2025 update to Copilot for Microsoft 365 immediately. Additionally, industry experts recommend that organizations proactively monitor their agent catalog for any irregular access patterns and adopt compensating security controls to mitigate potential risks.
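The monitoring recommendation can be sketched as a simple audit-log sweep: flag any invocation of an agent by a user who holds no policy grant for it. The log-record shape, grant table, and function name are illustrative assumptions; real deployments would pull equivalent fields from their audit tooling.

```python
from collections import Counter

# Assumed policy grants: which agents each user is entitled to invoke.
GRANTS = {"alice": {"exec-briefing"}, "bob": set()}

# Hypothetical audit-log entries; the schema is an assumption for this sketch.
audit_log = [
    {"user": "alice", "agent": "exec-briefing"},
    {"user": "bob",   "agent": "exec-briefing"},  # bob holds no grant
    {"user": "bob",   "agent": "exec-briefing"},
]

def ungranted_invocations(log, grants):
    """Count invocations of agents the caller was never granted."""
    hits = Counter()
    for entry in log:
        if entry["agent"] not in grants.get(entry["user"], set()):
            hits[(entry["user"], entry["agent"])] += 1
    return hits

print(ungranted_invocations(audit_log, GRANTS))
# Counter({('bob', 'exec-briefing'): 2})
```

Any nonzero count for a private agent is exactly the "irregular access pattern" the experts describe, and a reasonable trigger for revoking tokens and reviewing the agent's recent outputs.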


