Nearly 90% of AI usage in enterprise-level businesses globally goes under the radar of IT teams, according to the latest research.
Which is quite worrying when you consider more than a third (38%) of employees who use AI for work admit to sharing sensitive data with AI apps without their employer’s knowledge or consent.
With Shadow AI increasing, driven mostly by consumer generative AI apps like ChatGPT and Claude, businesses are caught between taking advantage of one of the most disruptive technologies we’ve seen and putting sensitive data at risk.
What is Shadow AI?
Shadow AI is any application with some element of artificial intelligence that’s used without the knowledge of IT.
You’ll most likely think of ChatGPT and those types of apps, but it could be anything that uses AI in any way, from CRM platforms that use AI to interrogate customer interactions, to service desks, to meeting recording tools that produce AI-generated transcripts or notes.
And Shadow AI is on the rise at a rapid pace.
In the UK alone, 71% of employees admit to using unapproved AI tools at work, and more than half admit to using these tools at least once a week, according to research by Microsoft.
Shadow AI vs Shadow IT
| | Shadow IT | Shadow AI |
| --- | --- | --- |
| What it is | Use of unauthorised software, hardware, or systems outside IT approval | Use of unauthorised AI tools or models without organisational approval |
| What’s being used | Apps, SaaS tools, cloud storage, devices | AI chatbots, image generators, code assistants, AI automations |
| Typical examples | Dropbox, Google Drive, Trello, personal VPNs | ChatGPT, Claude, Midjourney, GitHub Copilot (used without approval) |
| Who usually adopts it | Employees trying to work faster or bypass slow IT processes | Employees trying to speed up thinking, writing, coding, analysis |
| Primary motivation | Productivity and convenience | Productivity, creativity, decision support |
| Data risk | Data stored outside approved systems | Data processed or trained on by external AI models |
| Security concerns | Data leakage, compliance violations, lack of access control | Data leakage, model training on sensitive data, hallucinations |
| Compliance impact | GDPR, ISO, SOC, internal IT policy breaches | GDPR, IP ownership issues, AI governance and auditability |
| Visibility to IT | Sometimes detectable via network or app usage | Much harder to detect (browser-based, personal accounts) |
| Speed of adoption | Gradual | Explosive |
| Control difficulty | Medium | High |
| Potential harm | Loss or exposure of data | Loss of data and flawed decisions based on AI output |
| Business risk level | Moderate to high | High to critical (depending on use case) |
| Typical organisational response | Blocking tools, introducing approved alternatives | Creating AI policies, approved AI stacks, usage guidelines |
The difference between Shadow AI and Shadow IT is that Shadow IT is any asset being used on your hybrid digital estate that you don’t know about, whether hardware or software.
Shadow AI specifically refers to any application that is fully AI-based (or has an element of AI involved).
Arguably, Shadow AI is more dangerous to your business.
Unlike general Shadow IT, Shadow AI actively uses sensitive data, can retain and reuse that data unpredictably to train models, and offers little transparency for auditing how information is used.
Why does Shadow AI happen?
It sounds sinister. But Shadow AI rarely starts off with malice in mind.
Usually it’s because employees see tools that can make their life easier (and there are a lot of them around now), but don’t want to go through lengthy approval processes or sign offs.
So they just dive in and download free to use versions of applications to test them.
Sometimes departments or teams might put some spend into tools they think can be useful.
It could just be a salesperson downloading a recording tool for sales calls. Marketing using ChatGPT to create buyer personas. Or legal teams passing documents through AI for review or research.
Very rarely, if ever, is it with the intent to cause harm to a business.
It could be seen as a good thing (employees looking for innovative ways to work) if it didn’t create a ton of potential risks.
What are the risks of Shadow AI?
One of the reasons Shadow AI is potentially more dangerous than Shadow IT (sounds the same, but isn’t) is that no-one is exactly clear on how the data being put into AI is stored or used.
This is largely because there is no standard common practice for processing personal data, which means every AI tool could potentially handle information in a different way.
It’s also not clear where this information goes once you’ve put it into the app.
This is already attracting attention from regulators, who are quickly trying to issue guidance on how to use potentially sensitive data with AI.
The Information Commissioner’s Office (ICO) is among those to warn about putting sensitive business or personal data into AI systems.
AI systems introduce new kinds of complexity not found in more traditional IT systems that you may be used to using. Depending on the circumstances, your use of AI systems is also likely to rely heavily on third party code, relationships with suppliers, or both.
Information Commissioner’s Office
When it comes to the risks of Shadow AI, there are lots of things to think about:
- Data privacy and regulation
Definitely one of the biggest risks of Shadow AI is the potential for sensitive information to be breached or leaked, or for a security breach to spread to wider IT systems through the app.
When it comes to sensitive data getting out in the open, we’ve already seen this happen.
Back in August 2025, OpenAI had to remove a public sharing feature from ChatGPT after chats shared with the feature started appearing in Google search results.
At the time OpenAI said the feature “introduced too many opportunities for folks to accidentally share things they didn’t intend to”.
Imagine if sensitive commercial or personal information was included in these chats that suddenly could be found by anyone.
And considering regulations like GDPR, and the possible fines for breaching the rules, you can see why Shadow AI is a risk.
- Reputation damage
A knock-on effect of the point above is the potential reputational damage you’d face if it came out that you’d entered sensitive data into an AI tool without permission.
It would be even worse if that information then found its way into the public domain.
You have to ask how much trust you would lose with your customers if this happened.
How much would you trust a business that used your data like this?
- Lack of visibility
More than 81% of businesses say they lack visibility of AI usage, and 65% recognise an increased security risk related to AI.
Not understanding what tools are in use, how they’re being used and how much they potentially cost is a similar problem to Shadow IT.
With no visibility there’s no way you can effectively manage or govern AI tools, possibly creating productivity problems and unpredictable costs.
With AI there’s an added layer to the visibility problem: you also can’t see what these tools are doing with your data.
Under rules like GDPR, businesses have a responsibility to create transparent audit trails of how information or data is collected, stored and used.
As a business you have an obligation to minimise how data is used, and to honour users’ right to have their information erased if they ask. But by submitting this information to an AI tool, you immediately lose the ability to comply with these rules.
It’s hard enough complying with data regulations with no visibility of “general IT” systems. It’s impossible when you have no oversight of AI tools, and don’t understand how the tool is treating your data.
- Increased attack surface
Shadow IT extends the attack surface of your business.
But with Shadow AI additional risks are created because employees are actively moving company or personal data into systems that you don’t control, can’t see and can’t secure.
A marketing manager pasting a customer list into Claude or ChatGPT, or a developer pasting source code into Copilot is bypassing your enterprise security, and you’d never know it.
There’s not even an audit trail. And new risks are created whenever data is added to an AI tool.
Think about it like this. When a SaaS tool is compromised, you get notified of a potential breach, you’ve got a contract to refer to and you know what data is being stored in the tool.
With Shadow AI you have none of that. You didn’t know the tool existed, you don’t know what data was uploaded because there’s no audit trail, you have no way of knowing for certain what data has been compromised, and you have no proof of compliance.
All within an attack surface that’s growing at pace.
How to manage the risks of Shadow AI
Managing the risks of Shadow AI has the same fundamentals as managing any Shadow IT in your business.
- Get on top of what tools are being used
You can’t manage what you can’t see, so the first step in getting control of AI tools is getting visibility of what’s in use.
The easiest way to do this is to use a solution like CerteroX, which provides an accurate view of your entire hybrid IT estate, including Shadow AI and Shadow IT.
By putting all your tools in one place, you can at least start to get an idea of the scale of your Shadow AI landscape and create a plan to manage it.
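Before (or alongside) a dedicated platform, you can get a rough first picture by checking existing proxy or DNS logs against a list of domains associated with well-known AI services. The sketch below is a minimal illustration of that idea, assuming your logs can be exported as a CSV with `user` and `domain` columns; the domain list and file format are hypothetical and would need adapting to your own environment and tooling.

```python
import csv
from collections import defaultdict

# Illustrative list of domains associated with popular AI tools.
# In practice you would maintain and extend this list yourself.
AI_DOMAINS = {
    "chatgpt.com": "ChatGPT",
    "chat.openai.com": "ChatGPT",
    "claude.ai": "Claude",
    "gemini.google.com": "Gemini",
    "midjourney.com": "Midjourney",
}

def find_ai_usage(log_path):
    """Scan an exported proxy/DNS log (CSV with 'user' and 'domain' columns)
    and count requests to known AI services, per tool and per user."""
    usage = defaultdict(lambda: defaultdict(int))
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            domain = row["domain"].strip().lower()
            for ai_domain, tool in AI_DOMAINS.items():
                if domain == ai_domain or domain.endswith("." + ai_domain):
                    usage[tool][row["user"]] += 1
    return usage

if __name__ == "__main__":
    for tool, users in find_ai_usage("proxy_log.csv").items():
        print(f"{tool}: {len(users)} users, {sum(users.values())} requests")
```

Even a crude report like this won’t catch everything (personal devices and personal accounts stay invisible), but it gives you a starting point for the conversations that follow.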
- Understand how they’re being used
One mistake you can make with managing Shadow AI is blanket banning everything.
This misses the opportunity to find tools your teams are using that actually have a benefit to your business.
Once you’ve identified what’s being used, take some time to understand how people or teams are using these platforms.
There could be duplicate tools in use that could be consolidated in a more effective way and then officially adopted and rolled out.
There could be genuine productivity and commercial benefits to some tools that you just need to set guidelines around (like access or data use, for example).
The point is, look for the opportunities in your Shadow AI estate instead of just assuming everything being used is bad.
- Set an authorised and unauthorised list
While you might not want to just ban every AI tool, you’ll want to keep some control over what’s being used.
With the information you have on how tools are being used (and a better understanding of how they work) you can put together an authorised and unauthorised list of AI tools.
Admittedly this is difficult because understanding of how AI tools work is still at an early stage, which is why you want to get more control of tools rather than just banning them.
Look at how tools match up to your own compliance standards, and what level of control you might need to put in place to stay compliant going forward.
For example, an AI tool might help with productivity, but you need to restrict what type of information can be put into it.
This isn’t just a one-off practice.
You should continually review your AI estate – uncovering new Shadow AI as you go – to ensure what’s being used is controlled and adding benefits, rather than creating risk.
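To make an authorised and unauthorised list usable day to day, it helps to write it down somewhere people and tooling can actually check, rather than burying it in a policy PDF. The sketch below is one hypothetical way to structure that in code, assuming a simple per-tool rule of which data categories are permitted; the tool names, categories and verdicts are illustrative only, not a recommended stack.

```python
from dataclasses import dataclass, field

# Illustrative policy structure: which AI tools are approved,
# and what categories of data may be put into each one.
@dataclass
class AIToolPolicy:
    name: str
    approved: bool
    allowed_data: set = field(default_factory=set)  # e.g. {"public", "internal"}
    notes: str = ""

# Hypothetical starting list - replace with the tools you actually find in use.
POLICIES = {
    "chatgpt-enterprise": AIToolPolicy("ChatGPT Enterprise", True, {"public", "internal"}),
    "chatgpt-personal": AIToolPolicy("ChatGPT (personal account)", False, notes="Use the enterprise tenant instead"),
    "claude-team": AIToolPolicy("Claude Team", True, {"public"}),
    "notetaker-unvetted": AIToolPolicy("Unvetted meeting transcription tools", False, notes="Pending review"),
}

def check_usage(tool_key: str, data_category: str) -> str:
    """Return a simple verdict for a proposed use of an AI tool."""
    policy = POLICIES.get(tool_key)
    if policy is None:
        return "Unknown tool: refer to IT for review"
    if not policy.approved:
        return f"Not approved: {policy.name}. {policy.notes}".strip()
    if data_category not in policy.allowed_data:
        return f"Not permitted: '{data_category}' data cannot go into {policy.name}"
    return f"Allowed: {policy.name} may be used with '{data_category}' data"

print(check_usage("claude-team", "customer-pii"))
# -> Not permitted: 'customer-pii' data cannot go into Claude Team
```

Whether you express this as code, a spreadsheet or a config file matters less than keeping it versioned, reviewed regularly and easy for employees to consult.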
Managing innovation vs Shadow AI risk
There is an element of risk vs reward with Shadow AI.
With any fast-moving technology there’s a degree of the unknown.
And AI is such a new and fast-changing technology that it’s difficult to stay on top of how tools work, how they handle data and what potential risks they can introduce.
On the other hand, used properly and with some control in place, AI tools have the potential to transform processes and massively improve productivity in any organisation.
However you choose to handle AI in your business, the most important thing is having a clear picture of what’s in use, and how it’s being used so you can protect your business without limiting innovation.
Get a free demo of CerteroX and see how it can help you remove the mystery of Shadow AI so you can get the benefits without the risks.