Key Takeaways
- AI-driven imposter scams are increasingly sophisticated, making it harder to trust digital interactions, especially in professional settings.
- People are adopting elaborate, sometimes “lo-fi,” verification methods out of caution when dealing with new online contacts.
- This heightened vigilance, while necessary, is creating an atmosphere of distrust and consuming significant time and resources.
- Job-related scams have surged, with reported losses climbing from $90 million to $500 million between 2020 and 2024, according to the US Federal Trade Commission.
- While technological solutions are being developed, common sense and awareness of “too good to be true” offers remain vital in spotting fraudsters.
Nicole Yelland, who works in public relations, now undertakes a multi-step background check before accepting meeting requests from unfamiliar individuals. Her process might involve using personal data aggregators, subtly testing language skills, or insisting on video calls where cameras must be on.
This cautious approach stems from a personal experience. Yelland shared with WIRED, as detailed in a report by Ars Technica, that she was once entangled in an elaborate job scam. The scammers impersonated a real company and even provided a convincing slide deck, but they raised red flags during a video interview by refusing to turn on their cameras and making unusual requests for personal information.
Such digital imposter scams aren’t new, but they’re increasingly infiltrating professional channels, especially as remote work and distributed teams become more common. The same artificial intelligence tools that promise to boost productivity are, unfortunately, also making it easier for criminals to create fake online personas in seconds.
On platforms like LinkedIn, distinguishing a real headshot from a polished, AI-generated fake can be challenging. Deepfake video technology has also advanced to the point where longtime email scammers are now attempting to impersonate people on live video calls.
The gravity of this trend is underscored by data from the US Federal Trade Commission. Reports of job and employment-related scams nearly tripled between 2020 and 2024, with actual financial losses increasing from $90 million to a staggering $500 million during that period.
This climate of deception has ushered in what some are calling an “Age of Paranoia.” Individuals are now employing a variety of old-fashioned social engineering tactics and simple checks to verify interactions. This might include asking someone to send an email while on a phone call with them, or using Instagram DMs to confirm a LinkedIn message’s authenticity.
Other methods include asking for a selfie with a timestamp or using pre-agreed code words with colleagues. Blockchain software engineer Daniel Goldman was so alarmed after hearing about a convincing deepfake video of a prominent figure that he advised his family to always verify any urgent requests for money or passwords via email, even if a voice or video call seems legitimate.
In the recruitment world, Ken Schumacher, founder of verification service Ropes, says hiring managers are also adapting. They might quiz candidates on local knowledge about the city they claim to live in, or use what he calls the “phone camera trick”: asking to see the candidate’s laptop screen via their phone camera during a video call, to check for deepfake software.
However, these verification methods can have downsides. Honest individuals, particularly job candidates, may find such requests intrusive or off-putting, worrying about privacy or misinterpretation.
“Everyone is on edge and wary of each other now,” Schumacher observed. While diligent verification might seem like a good security practice, it can foster an atmosphere of distrust before any real connection is made, and as Yelland expressed, it’s a huge time drain: “I’m wasting so much time at work just trying to figure out if people are real.”
This challenge isn’t limited to the corporate sphere. Jessica Eise, a university professor studying social behavior, found her research team spending an “exorbitant” amount of time sifting through fraudulent responses to paid virtual surveys. They’ve had to shrink study sizes and revert to methods like snowball sampling and distributing physical flyers for recruitment.
Tech companies are trying to address the issue, with AI startups like GetReal Labs and Reality Defender offering deepfake detection. OpenAI CEO Sam Altman is also involved with an identity-verification startup, Tools for Humanity. Many in the blockchain space believe this technology holds solutions for identity verification.
But for now, many are finding that practical, if sometimes cumbersome, personal checks are their main line of defense. Barring a widespread, easy-to-use technical solution, a healthy dose of common sense can be invaluable. Yelland recalled that the fake job offer she received, upon closer inspection, promised pay significantly above the norm, unlimited vacation, and overly generous benefits—often a tell-tale sign of a scam.