spot_img
HomeNews & Current EventsChatterbox Labs Introduces AIMI for Agentic AI Security, Fortifying...

Chatterbox Labs Introduces AIMI for Agentic AI Security, Fortifying Tools Calling Capabilities

TLDR: Chatterbox Labs has launched AIMI for Agentic AI Security, a new product pillar designed to address the unique security challenges of agentic AI models that interact with external tools. This solution automates the testing of AI agents’ defenses against threats like system exploitation and data theft, ensuring safe and secure deployment of AI agents across various platforms.

Chatterbox Labs, a long-standing leader in AI security and safety testing, has announced the release of its new Agentic AI Security (tools calling) pillar, integrated into its AIMI product portfolio. This significant development, unveiled on October 8, 2025, marks a crucial step in securing the rapidly evolving landscape of artificial intelligence, particularly as the industry shifts towards more autonomous AI agents.

Unlike traditional generative AI models that primarily respond to prompts with text, AI agents are designed to take actions on behalf of users by calling external tools such as databases, code execution environments, and payment systems. This capability, known as ‘tools calling,’ introduces a new dimension of security risks that necessitate specialized testing.

AIMI for Agentic AI is engineered to automate the process of testing these models’ built-in defenses for tools calling. The platform is model-agnostic, offering prebuilt connectors that conform to the OpenAI standard—a widely adopted default in open-source AI inferencing environments—and integrates with leading cloud AI systems including Amazon Bedrock, Google Vertex AI, and Anthropic.

Chatterbox Labs’ innovative approach involves pairing tools payloads with insecure text prompts across critical security facets. This includes evaluating vulnerabilities related to system and code exploitation, data theft and exfiltration, and infrastructure disruption. By doing so, AIMI can precisely measure whether an AI model effectively rejects insecure prompts or if it initiates potentially harmful tools calling processes. This capability allows enterprises to identify which AI models require the most additional security attention before deployment.

Crucially, all testing within AIMI occurs in a safe, controlled environment. The system is designed to prevent the actual execution of harmful tools calling, enabling organizations to understand an AI model’s security profile without risking damage to their enterprise systems.

Users have the flexibility to interact with AIMI through a browser-based user interface or to automate the entire testing process via APIs. Furthermore, Chatterbox Labs emphasizes that the solution operates on the user’s infrastructure, ensuring full control and data sovereignty.

Also Read:

As teams increasingly build and deploy AI agents, they face the challenge of selecting secure AI models. AIMI provides thorough, independent security evaluations, furnishing these teams with the essential metrics needed to make informed decisions and ensure the robust security of their agentic AI deployments.

Dev Sundaram
Dev Sundaramhttps://blogs.edgentiq.com
Dev Sundaram is an investigative tech journalist with a nose for exclusives and leaks. With stints in cybersecurity and enterprise AI reporting, Dev thrives on breaking big stories—product launches, funding rounds, regulatory shifts—and giving them context. He believes journalism should push the AI industry toward transparency and accountability, especially as Generative AI becomes mainstream. You can reach him out at: [email protected]

- Advertisement -

spot_img

Gen AI News and Updates

spot_img

- Advertisement -