Key Takeaways
- Companies hold diverse views on candidates using generative AI (GenAI) tools during the hiring process.
- Some employers permit or even welcome AI for tasks like resume writing and certain assessments, valuing AI proficiency.
- Others ban GenAI in tests and interviews to ensure a fair assessment of an individual’s inherent abilities.
- Job applicants themselves have mixed feelings, with some embracing AI openly while others worry about how its use will be perceived.
The rise of generative AI is shaking up the job application scene, leaving both companies and candidates navigating new rules of engagement. For job seekers, tools like ChatGPT can be powerful allies, but their use is met with a spectrum of reactions from potential employers.
Jonathan Chan, a final-year student, encountered a strict no-GenAI policy during a timed coding test for a local bank. A pop-up even informed him that his eye movements would be tracked. Despite the monitoring, he completed the test on his own and secured the role, according to The Straits Times.
However, Jonathan’s experience isn’t universal; other applicants reported no such restrictions. Benjamin Lee, a communications student, used ChatGPT to generate ideas for an internship campaign plan, believing it was acceptable as long as the output wasn’t copied wholesale. He also found AI helpful for tailoring resumes and preparing for interviews.
Another graduate, Ms. Lim Zi Yi, used GenAI to translate and analyze a traditional Chinese assessment for a content analyst role. When she candidly mentioned her AI use during the interview, the interviewer praised her ability to work with AI effectively and offered her the job.
Ms. Lim’s employer isn’t alone. AI Singapore, for instance, allows applicants to its apprenticeship program to use AI tools during technical assessments. The organization views AI as an increasingly standard part of many jobs, including AI engineering, but has safeguards in place to ensure fair evaluation.
Recruitment agencies are also adapting. Josh Lim from Robert Walters Singapore stated they are open to candidates using GenAI for preparation, provided applications reflect genuine experiences. Some clients prefer monitored tests, while others trust candidates’ integrity.
Similarly, Ms. Sim Yunying of The Talent Detective said her firm doesn’t actively police AI use, emphasizing that candidate integrity is paramount. She believes proficiency with AI can indicate how well one might use it on the job, though human judgment remains crucial, especially for creative tasks.
On the other hand, not all employers are comfortable with AI in assessments. UOB designs its recruitment process to assess critical thinking and problem-solving without AI, according to its head of group human resources, Dean Tong. The bank aims for fair evaluation but trains new hires on responsible AI use, acknowledging that its policies may evolve.
Recruitment firm Randstad also prioritizes assessing a candidate’s genuine abilities without AI intervention. David Blasco, its Singapore country director, highlighted the difference between minor AI assistance and relying on it to create entire applications. He believes trained recruiters can spot over-reliance on AI if candidates struggle with follow-up questions.
Despite growing acceptance in some quarters, many job applicants remain hesitant to be transparent about their AI usage. Nathan Foo, a business major, relies heavily on AI for school and work tasks and would feel disadvantaged if barred from using it. Yet he requested anonymity, fearing his reliance on AI might be viewed negatively because of an “unspoken rule.” This suggests a lingering taboo around extensive AI use in professional settings.
As GenAI tools become more sophisticated and widespread, the debate over their role in hiring continues. Companies and job seekers alike are still figuring out the balance between leveraging powerful new technology and ensuring fairness and authenticity in the application process.