Key Takeaways
- AI is now a common workplace tool, with nearly six in ten employees using it on their own initiative.
- AI boosts efficiency and innovation, but significant concerns remain about trust and hidden use.
- Two-thirds of users don’t verify AI-generated content, and 57% hide their use of AI.
- Companies are lagging: fewer than half of employees receive AI training, and only 40% report a clear usage policy.
- Risky behaviors with AI, like sharing sensitive data, are prevalent and demand better governance.
Artificial intelligence has firmly entered our work lives. A global survey by Melbourne Business School and KPMG, involving over 48,000 people across 47 countries, reveals that AI is becoming a genuine partner in the workplace.
Nearly six out of ten employees are using AI on their own initiative, with a third of them turning to it at least weekly. The upsides are clear: time savings, easier access to information, and a notable spark for innovation. Almost half also feel AI has helped increase revenue-generating activities.
However, this AI integration isn’t without its shadows. Some workers question whether tasks completed with AI can truly be considered their own work. Others worry about how colleagues might judge them if they discover their reliance on these new tools.
AI is reshaping how we produce and collaborate, prompting a re-evaluation of individual roles and skills. This has led to a widespread habit of “hidden AI use,” with 57% of employees admitting they present AI-generated content as their own, without crediting the tool.
Alarmingly, two-thirds of users don’t even check the answers AI provides, which can lead to errors in their work. This tendency is partly due to a significant lack of official guidance.
Fewer than half of employees report receiving any AI training, and only 40% say their company has a clear policy on using these tools. This is compounded by a growing pressure, as half of all respondents fear falling behind professionally if they don’t quickly get to grips with AI.
“The findings reveal that AI at work is delivering performance benefits but also opening up risk from complacent and non-transparent use,” said Nicole Gillespie from the University of Melbourne, commenting on the survey results published by Free Malaysia Today.
The survey highlighted some risky habits: nearly one employee in two confessed to entering sensitive company data into public AI tools like ChatGPT. Furthermore, 44% admitted to bypassing their company’s internal policy, opting for these public solutions over organization-approved ones.
Younger employees, aged 18 to 34, are particularly inclined towards these incautious practices. Such actions expose both individuals and their organizations to serious risks, including significant financial losses, reputational damage, and data confidentiality breaches.
There’s an urgent need to strengthen how AI is governed in the workplace. David Rowlands from KPMG emphasized this, stating AI is “the greatest technology innovation of a generation, so it’s crucial that AI is grounded in trust.” He added that organizations must ensure the AI revolution happens responsibly for a future where the technology is “both trustworthy and trusted.”