
Unsanctioned AI Tools Pose Significant Cybersecurity Threats to Enterprises, Studies Reveal

TLDR: Recent studies from Mindgard and ManageEngine highlight a growing ‘shadow AI’ phenomenon within enterprises, where employees use unapproved AI tools, significantly increasing cybersecurity risks. Key concerns include widespread data exposure, lack of proper governance, and a critical skills gap among staff and even security teams. Experts urge organizations to establish clear AI governance frameworks, implement continuous training, and foster transparency to mitigate these escalating threats.

The proliferation of artificial intelligence (AI) tools within enterprise environments, often without formal oversight, is creating a substantial ‘shadow AI’ problem, leading to heightened security risks for organizations, according to recent studies. This trend, where employees integrate unapproved AI applications into their workflows, mirrors the earlier ‘shadow IT’ phenomenon but with potentially more severe consequences due to the sensitive nature of data processed by AI.

A survey conducted by Mindgard at the RSA Conference 2025 and InfoSecurity Europe 2025, involving over 500 cybersecurity professionals, revealed alarming statistics. A significant 56% of security professionals acknowledged that employees in their organizations are using AI tools without approval or oversight, with an additional 22% suspecting such usage. Furthermore, 87% of cybersecurity practitioners themselves are incorporating AI into their daily tasks, and nearly one in four admit to using personal ChatGPT accounts or browser extensions outside formal approval. This indicates that ‘shadow AI’ is not just an end-user issue but is prevalent even within security operations centers (SOCs), creating a critical blind spot for the very teams responsible for enterprise protection. Peter Garraghan, CEO and Co-founder at Mindgard, emphasized, “Shadow AI isn’t a future risk. It’s happening now, often without leadership awareness, policy controls, or accountability.”

The risks associated with this unsanctioned AI usage are profound. Mindgard’s research found that sensitive data, including internal documentation and customer records, is being entered into AI tools, with 12% of practitioners admitting they have no visibility into what data is being input. A striking 39% of organizations reported no clear ownership of AI risk, while another 38% vaguely assigned it to security teams, highlighting a significant failure in cross-functional governance.

Complementing these findings, a ManageEngine report, based on a survey of 350 IT decision-makers and 350 professionals in the U.S. and Canada, underscored that over 80% of tech leaders believe employee AI tool adoption is outpacing IT’s capacity to vet applications for safety. Nearly two-thirds of decision-makers identified data leakage or exposure as the primary risk. Ramprakash Ramamoorthy, Director of AI Research at ManageEngine, stated, “Shadow AI represents both the greatest governance risk and the biggest strategic opportunity in the enterprise.”

Further insights from ManageEngine’s ‘Navigating AI Anxiety: A/NZ Organisations in 2025’ report, which surveyed 300 ICT professionals in Australia and New Zealand, revealed that while 93% of organizations have adopted AI, 57% feel anxious about its integration. A staggering 97% reported a lack of AI-related skills within their organizations, particularly in AI governance and model training. Vinayak Sreedhar, Country Head at ManageEngine for ANZ, noted, “It’s a very startling kind of data. There’s a high level of adoption, yet a lot of apprehension. This primarily comes down to a skills issue.” He warned that “a poorly implemented AI system is potentially a cybersecurity nightmare” and that a skills gap can lead to employees unknowingly compromising data integrity, stressing, “Ignorance leads to issues.”

To combat these escalating risks, experts advocate for robust governance and proactive strategies. In its ‘AI and Cybersecurity: Balancing Risks and Rewards’ report, produced with the University of Oxford, the World Economic Forum recommends a lifecycle-based approach to AI security: integrating safeguards from the earliest design stages and maintaining continuous vigilance through regular risk assessments, adversarial testing, and a strong incident response strategy. Both Mindgard and ManageEngine stress the need for clear ownership of AI risk, enforced policies, and coordinated governance involving security, legal, compliance, and executive teams. Continuous, inclusive training for employees on AI risks and proper usage is likewise deemed essential to foster secure and transparent AI adoption.
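The visibility gap both studies describe, where security teams cannot see which AI tools employees use or what data flows into them, is often tackled first at the network layer. As a minimal illustration of that kind of control, the Python sketch below counts per-user requests to unapproved AI services in a proxy log. The CSV log format, its column names, and the domain denylist are assumptions made for this example only; they are not part of either vendor’s recommendations.

```python
# Illustrative sketch: flag traffic to unapproved AI services in a proxy log.
# Assumes a CSV log with columns: timestamp, user, domain (hypothetical format).
import csv
from collections import Counter

# Hypothetical denylist of AI service domains the organization has not sanctioned.
UNAPPROVED_AI_DOMAINS = {
    "chat.openai.com",
    "claude.ai",
    "gemini.google.com",
}

def flag_shadow_ai(log_path: str) -> Counter:
    """Count requests per user to unapproved AI domains in the proxy log."""
    hits: Counter = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["domain"].strip().lower() in UNAPPROVED_AI_DOMAINS:
                hits[row["user"]] += 1
    return hits

if __name__ == "__main__":
    # Report heaviest users of unsanctioned tools first.
    for user, count in flag_shadow_ai("proxy_log.csv").most_common():
        print(f"{user}: {count} requests to unapproved AI tools")
```

A count like this is only a starting signal; in practice it would feed the kind of governance workflow the reports describe, such as routing flagged usage into policy review and training rather than punitive blocking alone.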

Ananya Rao (https://blogs.edgentiq.com)
Ananya Rao is a tech journalist with a passion for dissecting the fast-moving world of Generative AI. With a background in computer science and a sharp editorial eye, she connects the dots between policy, innovation, and business. Ananya excels in real-time reporting and specializes in uncovering how startups and enterprises in India are navigating the GenAI boom. She brings urgency and clarity to every breaking news piece she writes. You can reach out to her at: [email protected]
