Key Takeaways
- AI leader Anthropic, maker of the Claude chatbot, bans the use of AI in its own job applications.
- The company states it wants to evaluate genuine personal interest and unassisted communication skills.
- This policy creates an ironic situation, as AI firms typically advocate for widespread AI adoption.
- Recruiting experts suggest this approach helps gauge authentic human abilities in an increasingly AI-influenced world.
Anthropic, one of the top artificial intelligence companies and maker of the advanced chatbot Claude, is taking a firm stance: no AI assistance when you apply for a job there.
When applying for a role at Anthropic, candidates are asked to write a short essay explaining their interest in the company. Notably, a job posting for an economist stated: "please do not use AI assistants during the application process."
Anthropic explains that it wants to "understand your personal interest… without mediation through an AI system" and to "evaluate your non-AI-assisted communication skills." Applicants must actively agree to this condition.
This rule isn't limited to writing-heavy roles; it extends to highly technical positions such as machine learning engineers and research scientists. One such role, "Research scientist/engineer, honesty," makes the point neatly: using AI in the application would directly contradict the job's focus.
The policy is striking because AI companies usually champion the use of AI in all aspects of life, often warning that failing to adopt AI will leave people behind. So why block it in their own hiring?
Jose Guardado, a tech recruiter with experience at Google and Andreessen Horowitz, shared his perspective with Business Insider. He noted that “AI is circumventing the evaluation of human qualities and skill,” leading to a “bit of a backlash.”
Guardado believes “the pendulum is swinging more toward humanities and authentic human experiences.” In a world where AI can write code, communication and the ability to articulate plans in human language become even more crucial.
Requiring candidates to write unaided helps companies “truly get a measure of applicants,” Guardado explained. It’s a way to find authentic signals amid concerns about AI being used to cheat on tests or coding interviews.
He also pointed out the irony: "It's ironic that the maker of Claude is at the forefront of this." For now, banning AI in applications may be a practical way to assess candidates. Anthropic's competitor OpenAI does not appear to have a similar ban. According to Business Insider, neither company responded to requests for comment on the policy.