Key Takeaways
- Many employees, including over a third of IT staff, are using AI tools at work without company approval.
- This “shadow AI” creates significant security risks and can make skill gaps worse.
- Employees often hide their AI use because company-provided tools fall short, because they want a secret edge, or because they fear looking incompetent or losing their jobs.
- Companies need to develop clear AI policies, provide better training, and improve security to manage AI’s growing presence.
As artificial intelligence spreads into more workplaces, many organizations are struggling to manage the technology responsibly, new research shows.
A report from TechRadar, covering findings from Ivanti, identifies the growing use of unauthorized AI tools as a major concern, one that exposes gaps in employee skills and raises security risks.
The study found that over a third (38%) of IT workers admit to using generative AI tools not sanctioned by [their company](https://www.techradar.com/computing/artificial-intelligence/naughty-naughty-more-than-a-third-of-it-workers-are-using-unauthorised-ai-as-the-risks-of-shadow-tech-loom-large). Nearly half of all office workers (46%) say some or all of the AI tools they use weren’t provided by their employers.
Interestingly, while 44% of companies have started using AI across different departments, many employees are still secretly using unapproved tools. A key reason seems to be a lack of proper training on company-approved options.
One in three workers said they keep their AI use quiet from management. Some do this because they feel it gives them a “secret advantage,” while others don’t want to be seen as incapable.
This hidden use of AI is also fueling workplace anxiety. About 27% of workers reported feeling like an imposter because of AI, and 30% worry that AI might eventually replace their roles. This tension contributes to stress and burnout.
These behaviors point to a bigger issue: a lack of trust and open communication. It underscores the urgent need for businesses to create clear and fair rules about using AI.
“Organizations should consider building a sustainable AI governance model, prioritizing transparency,” commented Ivanti’s Chief Legal Counsel, Brooke Johnson. She also stressed tackling “the complex challenge of AI-fueled imposter syndrome through reinvention.”
The secret use of AI also brings serious security threats. Without proper oversight, unauthorized tools can accidentally leak sensitive data, sidestep security measures, and leave company systems vulnerable to attacks. This is especially risky when employees with high-level access use these tools.
The solution isn’t to simply ban these tools. Instead, companies need to modernize: create straightforward AI policies that include everyone, and invest in secure technology, such as strong endpoint protections and strict access controls, especially with more people working remotely.
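To make “strictly control access” concrete, here is a minimal sketch of one common approach: an egress allowlist that permits outbound traffic only to company-approved AI services. This is an illustration, not anything prescribed by Ivanti or TechRadar; the hostnames and the `APPROVED_AI_HOSTS` policy are hypothetical.

```python
# Minimal sketch of an egress allowlist for AI services.
# All hostnames and the policy structure are hypothetical examples,
# not part of the Ivanti report or any specific product.
from urllib.parse import urlparse

# Hypothetical set of company-approved AI endpoints.
APPROVED_AI_HOSTS = {
    "ai.internal.example.com",      # company-hosted model gateway
    "api.approved-vendor.example",  # vetted third-party service
}

def is_request_allowed(url: str) -> bool:
    """Return True only if the outbound request targets an approved AI host."""
    host = urlparse(url).hostname or ""
    return host in APPROVED_AI_HOSTS

# A proxy or endpoint agent could then log and block everything else.
for url in [
    "https://ai.internal.example.com/v1/chat",
    "https://random-genai-tool.example/api",
]:
    verdict = "ALLOW" if is_request_allowed(url) else "BLOCK"
    print(f"{verdict}: {url}")
```

In practice this kind of check would live in a secure web gateway or device-management agent rather than application code, but the design choice is the same: default-deny, with a short, auditable list of sanctioned AI tools.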
Ivanti suggests that AI itself isn’t the enemy. The real problems are fuzzy policies, weak security, and a breakdown of trust. If these issues aren’t addressed, this “shadow AI” could deepen skill divides, worsen employee mental health, and put critical systems at risk.