Key Takeaways
- Over half (58%) of global workers intentionally use AI tools on the job, with many experiencing productivity gains.
- However, risky behaviors are common, including uploading sensitive data to public AI (48%) and using AI outputs without checking (66%).
- Many employees (61%) hide their AI use from employers, creating “shadow AI” risks.
- Only about a third of companies have clear AI usage policies, and fewer than half of employees have received AI training.
- Experts urge companies to implement clear guidelines, provide training, and foster an open environment for discussing AI use safely.
If you’ve ever asked ChatGPT to help with a work email or summarize a document, you’re in good company. Artificial intelligence is quickly changing how we work.
A new global study paints a detailed picture of AI adoption in the workplace. Polling over 32,000 workers across 47 countries, the research found that 58% are deliberately using AI tools for their jobs. About a third use these tools weekly or even daily.
Many who use AI report real benefits, finding it boosts their efficiency, helps them access information faster, sparks innovation, and improves the quality of their work. These findings echo other studies showing AI can make employees and companies more productive.
General AI tools like ChatGPT are the most popular choice. According to the study, as reported in The Conversation, around 70% of employees turn to free, public AI rather than tools provided by their employers.
But there’s a concerning side to this trend. Nearly half (47%) of employees using AI admit they’ve used it inappropriately. Even more worryingly, 63% have seen coworkers using AI in questionable ways.
A major red flag is how sensitive company information is handled. The survey revealed almost half (48%) of employees have fed sensitive company or customer data into public AI tools. Another 44% confessed to using AI in ways that violate company rules.
Relying blindly on AI is also widespread. Two-thirds (66%) said they’ve used AI-generated content without checking it for accuracy first. This complacency has led to mistakes, with 56% admitting AI caused errors in their work.
Younger workers, aged 18–34, seem more prone to these risky behaviors than their older colleagues. These actions carry significant risks, potentially leading to financial losses, damaged reputations, and breaches of privacy, issues already documented in some cases.
Adding to the challenge, many employees aren’t open about their AI usage. The study found 61% have avoided telling their employer when they use AI, 55% have passed off AI content as their own, and 66% used AI tools without knowing if it was permitted.
This hidden "shadow AI" use makes it much harder for organizations to manage the associated risks. A lack of clear rules and training seems to be driving this behavior: only 34% of employees said their company has an AI policy.
Pressure to keep up might also play a role, as half of the employees surveyed fear falling behind if they don’t adopt AI tools.
The findings highlight an urgent need for better AI governance in workplaces. Companies need to guide employees on how to use these powerful tools responsibly.
Investing in AI literacy through training is crucial. The research suggests that employees with better AI understanding are more likely to use the tools critically, verifying outputs and considering limitations.
Despite this, fewer than half of the employees surveyed reported receiving any AI-related training. Organizations need clear policies, accountability systems, and strong data privacy measures.
Creating a work environment where employees feel safe discussing their AI use is also vital. This transparency helps manage risks and encourages shared learning and responsible innovation.
AI holds great potential to improve work, but realizing these benefits safely requires an AI-savvy workforce, strong governance, and a culture of open, accountable use. Without these, AI could become more of a liability than an asset.