Many UK businesses could be unwittingly sitting on a data and privacy disaster as thousands of employees introduce Shadow AI to their IT landscape.
That’s the warning from Certero after a new Microsoft report found that 71% of employees have used unapproved AI tools at work, with more than half (51%) of those using these tools at least once a week.
As consumer apps like ChatGPT and Claude increasingly creep into the workplace through individual subscriptions rather than IT-approved versions, companies are left extremely vulnerable to privacy and security risks.
One of the more worrying findings in Microsoft’s “Rise of Shadow AI” report is that the employees introducing the tools into their companies show little concern about the risks.
Only about a third of those surveyed said they were concerned about the privacy of any customer and company data put into AI tools. And only 29% were concerned about potential security risks created in company IT systems.
Is a lack of tech innovation part of the Shadow AI problem?
Among the reasons given for why they use their own AI tools at work, 28% of Microsoft’s respondents said their company doesn’t provide a work-approved option.
This is something we commonly hear from within companies that are slow to adapt to changing technology, leaving teams and employees to bring in their own tools with no oversight.
This type of unauthorised tool use has exploded in the last few years with the rise of cloud services and free-tier SaaS applications.
Understanding the risks of Shadow AI
One of the biggest concerns around the unauthorised introduction of Shadow AI is the lack of transparency over how data put into these tools is stored or used.
The Information Commissioner’s Office (ICO) in the UK has already warned about, and issued guidance on, the risks of putting sensitive data into AI systems.
They warn that “AI systems introduce new kinds of complexity not found in more traditional IT systems that you may be used to using”.
One difficulty they raise is that common practices about how to process personal data securely in data science and AI engineering are still in development.
They highlight that how one AI tool treats data can differ from another, and that as a company you have no way of knowing how data put into these tools will be handled.
This is especially concerning considering regulations like GDPR, which require you to have transparent audit trails over how you collect, store and use data in your business.
Complying with this when it comes to AI is difficult enough when you know about the AI tools being used. It’s impossible when it comes to Shadow AI, leaving you exposed to risk and possible fines.
Getting control of your hybrid IT environment is more crucial than ever
Shadow AI has raised the stakes when it comes to the dangers of unapproved IT assets.
Shadow IT has always been a cause for concern for any business, but primarily from a cost and productivity perspective.
That’s not to say security concerns haven’t been raised before by Shadow IT.
You only have to look back to the Log4Shell incident in 2021, a critical vulnerability that allowed attackers to execute arbitrary code remotely on affected systems.
While the initial exploitation was a zero-day event, it became a bigger, longer-running issue on unpatched devices. Without visibility of your IT estate, this is exactly the kind of attack that could impact your business.
But AI presents a completely new set of challenges due to the early nature of how tools are developed and trained on the data put into them, coupled with the rising use of unapproved tools.
It’s critical that businesses which haven’t already done so start reviewing their IT Asset Management (ITAM) practices, ensuring they have measures and tools in place to get complete visibility of their hybrid IT environments.
Having a clear view of everything used in your business – whether you know about it or not – is the only way you can reliably protect yourself from the threats posed by AI, or at least help you control how these tools are rolled out.
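To make the discovery step concrete, here is a minimal sketch of the kind of check an ITAM or security team might run: matching outbound traffic against a watchlist of consumer AI services. The domain list and the log format (`timestamp user domain`) are illustrative assumptions only; real discovery tooling works across proxy logs, endpoint agents and SaaS-management data, and covers far more services.

```python
# Minimal sketch: flag potential Shadow AI usage in a simplified web-proxy log.
# The watchlist and log format are illustrative assumptions, not a
# definitive catalogue of AI services.

# Hypothetical watchlist of consumer AI domains
AI_DOMAINS = {
    "chatgpt.com",
    "chat.openai.com",
    "claude.ai",
    "gemini.google.com",
}

def flag_shadow_ai(log_lines):
    """Return (user, domain) pairs where a watched domain appears.

    Assumes each log line is 'timestamp user domain', a simplification
    of typical proxy-log formats.
    """
    hits = []
    for line in log_lines:
        parts = line.split()
        if len(parts) != 3:
            continue  # skip malformed lines
        _, user, domain = parts
        if domain.lower() in AI_DOMAINS:
            hits.append((user, domain))
    return hits

sample = [
    "2025-01-10T09:12:00 alice chatgpt.com",
    "2025-01-10T09:13:05 bob intranet.example.com",
    "2025-01-10T09:14:30 carol claude.ai",
]
print(flag_shadow_ai(sample))  # → [('alice', 'chatgpt.com'), ('carol', 'claude.ai')]
```

A simple scan like this only surfaces web-based usage; it won’t catch desktop apps or personal devices, which is why complete asset visibility matters.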
Speaking about the results of their survey, Darren Hardman, CEO, Microsoft UK & Ireland, said:
“UK workers are embracing AI like never before, unlocking new levels of productivity and creativity. But enthusiasm alone isn’t enough.
“Business must ensure the AI tools in use are built for the workplace, not just the living room.”
He added: “Only enterprise-grade AI delivers the functionality that employees want, wrapped in the privacy and security every organisation demands.”
At Certero, our AI-powered IT Asset Management software is here to help organisations get the visibility they need to effectively observe, manage and govern AI within their company, ensuring they’re able to embrace new innovations without sacrificing control.