TLDR: The Open Worldwide Application Security Project (OWASP) has introduced a new AI Vulnerability Scoring System (AIVSS) at its Global AppSec conference. This system is designed to address the unique and complex security challenges posed by agentic artificial intelligence, which traditional vulnerability assessment frameworks like CVSS are ill-equipped to handle.
WASHINGTON, D.C. – The Open Worldwide Application Security Project (OWASP) made a significant stride in artificial intelligence security by unveiling its new AI Vulnerability Scoring System (AIVSS) at the Global AppSec conference on Friday, November 7, 2025. This innovative framework aims to provide a standardized methodology for identifying, assessing, and mitigating vulnerabilities specific to agentic and other AI systems, a critical need as AI adoption accelerates.
Traditional vulnerability assessment frameworks, such as the Common Vulnerability Scoring System (CVSS), were developed for deterministic software and fall short when evaluating the non-deterministic nature and autonomous capabilities of modern AI. Ken Huang, an AI expert, author, and adjunct professor, highlighted this inadequacy during his presentation. “These [traditional frameworks] assume traditional deterministic coding. We need to deal with the non-deterministic nature of agentic AI,” Huang stated.
The AIVSS project, co-led by Huang alongside Zenity Co-Founder and CTO Michael Bargury, Amazon Web Services Application Security Engineer Vineeth Sai Narajala, and Stanford University Information Security Officer Bhavya Gupta, is built upon the foundation of CVSS but incorporates crucial ‘AI special sauce.’ It includes an agentic-capabilities assessment that accounts for risk-amplifying factors such as autonomy, non-determinism, and tool use. The scoring mechanism involves taking a CVSS base score, adding the agentic-capabilities assessment, dividing the sum by two, and then multiplying by an environmental context factor.
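The scoring arithmetic described above can be sketched as a small function. This is an illustrative reading of the formula as reported, not the official AIVSS tool; the function name, parameter names, and the assumption that all inputs fall on familiar 0–10 (or multiplier) scales are ours.

```python
def aivss_score(cvss_base: float,
                agentic_assessment: float,
                environmental_factor: float) -> float:
    """Sketch of the reported AIVSS arithmetic (names are illustrative):
    average the CVSS base score with the agentic-capabilities assessment,
    then scale the result by the environmental context factor."""
    return ((cvss_base + agentic_assessment) / 2) * environmental_factor

# Example: a CVSS base of 8.0 and an agentic-capabilities assessment
# of 6.0, in a neutral environment (factor 1.0):
# (8.0 + 6.0) / 2 * 1.0 = 7.0
print(aivss_score(8.0, 6.0, 1.0))
```

Under this reading, the agentic-capabilities assessment can raise or lower the final score relative to the CVSS base, while the environmental factor adjusts for deployment context.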
Huang emphasized that while autonomy is not inherently a vulnerability, it significantly elevates risk. The dynamic and ephemeral identities used by AI agents, for instance, present a challenge that fixed machine identities in traditional software do not. “With agentic AI, you need the identity to be ephemeral and dynamically assigned,” Huang explained, noting that this necessitates granting privileges for tasks, which in turn introduces new security considerations.
Key risks that AIVSS is designed to quantify include tool misuse, goal manipulation, and access control violations. The framework also addresses emerging threats like prompt injection, which OWASP has identified as a top AI security risk. The AIVSS website (aivss.owasp.org) provides comprehensive guides for structured AI risk assessment and a scoring tool to help security professionals calculate AI-specific risks.
This initiative marks a pivotal moment in cybersecurity, offering a much-needed, quantifiable approach to secure the rapidly evolving landscape of artificial intelligence.