AI Gives Scammers an Unsettlingly Human Touch

Key Takeaways

  • Generative AI is making spam and phishing emails look much more realistic by eliminating common spelling and grammar mistakes.
  • Scammers can now use AI to easily create messages in specific languages and dialects they previously couldn’t target effectively.
  • AI chatbots are being used to start romance scams, convincingly mimicking genuine interest before human scammers take over.
  • Cheap and effective AI-generated fake audio calls are already being used to trick people, such as by impersonating colleagues.
  • While realistic real-time fake video calls may still be developing, experts believe they could become common within a few years.
  • These advancements highlight a growing need for better ways to verify people’s identities online.

Spam is getting a significant upgrade thanks to generative AI. These tools allow scammers to create slick, convincing messages, moving past the poorly written emails that used to be easy red flags.

In fact, the quality has improved so much that perfect grammar might now be suspicious. Chester Wisniewski from security firm Sophos noted that spelling and grammar errors in spam have dropped sharply. He joked to The Register during the RSA Conference, “if the grammar and spelling is perfect, it probably is a scam, because even humans make mistakes.”

AI also breaks down language barriers for fraudsters. Previously, scammers focused on common languages. Now, AI makes it simple to craft scams in various languages and even specific dialects, like Québécois French or European Portuguese, reaching audiences they used to ignore.

This makes it harder for people in regions like Quebec or Portugal, who could previously spot scams written in the wrong dialect, to identify fraudulent messages.

“From the criminal enterprise perspective, it’s opened the world,” Kevin Brown from NCC Group told the publication. He added that AI bypasses traditional phishing training focused on spotting poor language and obvious errors.

The technology is also proving effective in the initial stages of romance scams. AI chatbots can handle early conversations, building rapport and appearing empathetic before a human operator steps in to solicit money or push investment schemes.

Beyond text, AI-powered audio deepfakes are a current threat. Wisniewski mentioned scenarios where scammers use AI-generated voices, mimicking someone like an IT staff member, to call employees and trick them into revealing passwords. “You can do real-time audio deepfakes for pennies,” he stated.

Real-time video deepfakes, however, might still be a little way off for widespread criminal use. Wisniewski expressed skepticism about current capabilities, suggesting that reports of major scams using video fakes might be exaggerated. He thinks truly convincing, interactive video fakes are complex even for major tech companies.

He estimates it could be about two years before criminals can affordably use sophisticated video deepfakes, and perhaps three before they become common prank tools. Brown, however, noted that NCC Group’s security testers have already had some success creating video deepfakes for specific situations.

While the exact timeline is debated, both experts agreed that technology is rapidly advancing. This evolution means we’ll increasingly need stronger methods to verify who we’re actually communicating with online, moving beyond today’s systems.
