Opinion: Shadow AI is a growing threat, but one companies can harness
Source: Business Times
Article Date: 03 Dec 2025
Author: Lisa Bouari
Tightening security in a way that erodes internal trust can drive the behaviour further underground.
More companies are now embracing artificial intelligence, but employees are moving faster. In Asian countries such as China, India, South Korea and Japan, people’s comfort with AI is notably higher than the global average, according to the 2025 EY AI Sentiment Index.
But when organisations are slow to implement AI solutions, tech-savvy staff may turn to “shadow AI” – using their own unauthorised AI tools for work. Often, these tools are free and untracked. Employees develop their own workflows around them.
The scale of this invisible risk is significant. A 2023 Salesforce survey of 14,000 global workers found that half of people using generative AI at work were doing so without approval or guidance from their employers.
Worse, Cisco’s 2025 Data Privacy Benchmark Study revealed that 42 per cent of respondents entered non-public company information and 46 per cent entered employee names or information into AI tools. This poses a clear risk of data breaches with potential operational, regulatory and legal ramifications.
Regulated utilities, consumer goods companies and professional services firms have all suffered the consequences of such data breaches, including damage to their reputations.
The risks of shadow AI
If staff use AI co-pilots and other tools without the knowledge or approval of IT security teams, dangers can emerge quickly. While employees may only seek to improve their productivity and deliver strong results, they may unwittingly expose sensitive customer information, or their companies’ intellectual property (IP) and proprietary data.
Many public AI tools retain these inputs to train their models, meaning that with the right prompts, proprietary data could later be retrieved by third parties.
The risk goes beyond data leakage. One of the big use cases for AI is writing and debugging code for enterprise systems. However, unvetted AI tools connected to internal systems via browser extensions or application programming interfaces can introduce malicious code into these networks.
As a result of employees using shadow AI, companies in Europe have been hit with investigations into breaches of the General Data Protection Regulation. Others have been ridiculed for sending AI hallucinations to clients and have seen IP leaked to competitors, forcing heavy spending on containment and remediation.
So what can companies do to prevent such damage?
Harness the innovators
First, act to prevent further data loss. Companies should adopt monitoring tools to detect unsanctioned AI use by tracking browser extensions, network traffic and uploads. They should also train staff on AI risks, particularly around data privacy and hallucinations, and offer enterprise versions of approved AI tools that come with data isolation and audit controls.
Notably, the natural temptation – to announce strict bans on specific tools or unvetted AI – often backfires. Tightening security in a way that erodes internal trust can inhibit innovation and drive the behaviour further underground.
A new approach is needed. Instead of issuing outright – and often unenforceable – bans, companies should provide sandboxes. These are safe environments, isolated from production systems and using restricted datasets, where staff can experiment.
They can help with AI training and workforce upskilling while also encouraging the development of new ideas for applying AI. Such an approach promotes collaboration among an organisation’s IT, business, marketing, risk and legal units.
Gamifying AI development offers a way to build on this. Leaders could encourage cross-functional teams to re-engineer entire business workflows. By rewarding prototypes that go into production, companies can stimulate creativity and create a pipeline for imagining, testing and scaling AI applications that might become game-changers for a business.
Shadow AI certainly brings risk. But so does falling behind in AI implementation.
Companies need to be agile in their governance frameworks even as they ensure IT security. Instead of banning employees from bringing in new AI tools and driving creative thinkers away, they should provide safe and stimulating spaces for employees to discover and expand what the technology can do.
If half of the employees already using AI at work are doing so with unauthorised tools, harnessing that grassroots innovation rather than crushing it may be the best way to improve your company’s chances of long-term success.
The writer is regional AI leader, Oceania, at EY. The views presented do not necessarily reflect the views of the global EY organisation or its member firms.
Source: The Business Times © SPH Media Limited. Permission required for reproduction.