
Cyera Establishes Research Labs, Releases Inaugural AI Data Security Report Highlighting Critical Readiness Gap

TLDR: Cyera has launched Cyera Research Labs and published its first ‘2025 State of AI Data Security Report,’ revealing a significant gap between AI adoption and robust data security measures. The report, based on a survey of over 900 IT and security leaders, indicates that while 83% of enterprises use AI, only 13% have adequate visibility into how AI interacts with sensitive data, posing substantial risks.

NEW YORK – Cyera, a rapidly expanding data security firm, today announced the official launch of Cyera Research Labs, a dedicated division focused on providing data-driven insights at the intersection of artificial intelligence and data security. This initiative coincides with the release of its groundbreaking ‘2025 State of AI Data Security Report,’ developed in collaboration with CyberSecurity Insiders. The report underscores a critical disparity in enterprise AI readiness.

The comprehensive study, which surveyed more than 900 IT and security leaders, found that a staggering 83% of organizations are already leveraging AI technologies. However, a concerning statistic reveals that only 13% possess strong visibility into how AI systems access and interact with their sensitive data. This widening ‘readiness gap’ exposes businesses to unforeseen vulnerabilities and compliance challenges.

Key findings from the 2025 State of AI Data Security Report highlight several areas of concern:

Autonomous Agents Drive Risk: More than three-quarters of respondents (76%) identified autonomous AI agents as the most challenging to secure. Furthermore, a mere 9% of organizations currently monitor AI activity in real time.

Underdeveloped AI Identity Management: Only 16% of organizations have established AI as a distinct identity class with dedicated security policies. Alarmingly, 21% of respondents admit to granting broad data access to AI systems by default.

Controls Lag Incidents: The report indicates that two-thirds of surveyed organizations (66%) have already detected instances of AI over-accessing sensitive data. Despite this, only 11% possess the capability to automatically block such risky activities.

Immature Governance: A significant governance deficit was identified, with only 7% of respondents having a dedicated AI governance committee in place. Moreover, just 11% feel adequately prepared for the complexities of emerging AI regulations.

An illustrative anecdote from the report underscores the practical implications of these findings: one participant recounted how the first security alert regarding AI misuse originated not from a traditional Security Information and Event Management (SIEM) system, but from a sales manager. The manager questioned why an AI copilot could ‘magically’ access a sensitive pricing deck, revealing a scenario of ‘default access, missing guardrails, and no monitoring at the prompt layer’ rather than a sophisticated exploit.
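The failure mode in that anecdote can be illustrated with a short sketch: a deny-by-default access check at the prompt layer, with every decision logged so that AI data access is visible before an incident, rather than discovered by a user. All identifiers here (agent names, resource labels, the policy table) are hypothetical and not drawn from Cyera's products or the report.

```python
# Minimal sketch of a prompt-layer guardrail: before an AI copilot's data
# request is served, it is checked against an explicit allowlist instead of
# being granted broad access by default. All names are illustrative.

SENSITIVE_LABELS = {"pricing", "pii", "financials"}

# Deny-by-default policy: each agent identity lists the labels it may read.
# Treating the AI agent as its own identity class mirrors the report's point
# about dedicated security policies for AI.
AGENT_POLICY = {
    "sales-copilot": {"product-docs"},
    "finance-copilot": {"financials"},
}

audit_log = []  # prompt-layer monitoring: every decision is recorded

def authorize(agent_id: str, resource_label: str) -> bool:
    """Return True only if the agent is explicitly allowed this label."""
    allowed = AGENT_POLICY.get(agent_id, set())  # unknown agents get nothing
    decision = resource_label in allowed
    audit_log.append({
        "agent": agent_id,
        "resource": resource_label,
        "allowed": decision,
        "sensitive": resource_label in SENSITIVE_LABELS,
    })
    return decision

# A sales copilot asking for a pricing deck is blocked and logged,
# rather than silently served.
print(authorize("sales-copilot", "pricing"))      # False
print(authorize("finance-copilot", "financials"))  # True
```

With this structure in place, the alert about a copilot touching a pricing deck would surface from the audit log (an entry flagged both sensitive and denied) instead of from a puzzled sales manager.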


The ‘2025 State of AI Data Security Report’ aims to provide a foundational cross-industry baseline for AI governance. It aligns with the OWASP Top 10 for LLM Applications and offers insights into AI adoption, data visibility, monitoring practices, control mechanisms, access models, and overall governance readiness. Cyera’s objective with this report is to equip security and data leaders with a pragmatic roadmap for implementing effective AI security programs, emphasizing the necessity of protecting the data that enriches AI systems.

Nikhil Patel
https://blogs.edgentiq.com
Nikhil Patel is a tech analyst and AI news reporter who brings a practitioner's perspective to every article. With prior experience working at an AI startup, he decodes the business mechanics behind product innovations, funding trends, and partnerships in the GenAI space. Nikhil's insights are sharp, forward-looking, and trusted by insiders and newcomers alike. You can reach him at: [email protected]
