Key Takeaways
- Creating fake voice recordings (deepfakes) using AI is surprisingly easy and affordable.
- Scammers are using these AI-generated voices to try to access people’s bank accounts.
- These deepfakes are becoming harder to detect, posing a significant challenge for banks and individuals.
- Fraud losses linked to AI are expected to increase dramatically in the coming years.
- Experts advise caution with unexpected calls requesting personal information, even if the voice sounds familiar.
Even someone who isn’t tech-savvy can now create a convincing voice clone using readily available AI tools. A reporter from Business Insider demonstrated this by feeding a few seconds of their own voice, recorded during a radio interview, into a cheap AI service.
Though slightly robotic, the generated voice sounded eerily like the reporter’s own. The reporter used it on a call to their bank, where it interacted with both automated systems and a human representative, apparently without raising suspicion.
This highlights a growing problem: scammers are exploiting cheap generative AI tools to impersonate people. Their goal is often to gain access to bank accounts or even open new ones fraudulently.
It’s not just about individual attempts anymore. Criminal organizations can use AI to automate these fake calls on a massive scale, targeting many people for smaller amounts. According to Business Insider, this technology helps fraudsters engineer scams more effectively.
The technology is advancing quickly. Deepfakes are getting better and harder to spot. A Deloitte report mentioned in the article predicts US fraud losses could hit $40 billion by 2027, up significantly from $12.3 billion in 2023, partly due to AI’s role.
Many banks are concerned. An Accenture survey found 80% of bank cybersecurity executives believe AI is boosting hackers’ abilities faster than banks can strengthen defenses.
Scammers often combine deepfaked voices with personal information, such as debit card or Social Security numbers, bought on the dark web after data breaches. They can also use AI to create fake documents.
Ben Colman, CEO of Reality Defender, a company specializing in AI detection, told Business Insider that scammers play a numbers game. Automated attempts mean they only need occasional success to profit.
It’s not just the wealthy being targeted. Scamming many people out of smaller sums can be very profitable. FBI data shows the average loss per online scam is modest, but complaint volumes are high across all age groups.
Detecting these fakes is tough. Even AI companies like OpenAI have struggled to create reliable detectors for AI-generated content. The quality of AI-generated voice, video, and images has improved dramatically.
During the reporter’s test call, the fake voice provided real account details, such as card numbers (the kind of information easily obtained through leaks), and asked to change account settings. While the bank required some changes to be made online or at an ATM, the experiment exposed potential vulnerabilities.
Authorities like the Financial Crimes Enforcement Network (FinCEN) and the Federal Reserve are issuing warnings about the increased risk of identity fraud fueled by AI deepfakes.
Experts advise people to be deeply suspicious of unexpected calls asking for money or information, especially those conveying urgency. Even if a voice sounds like a friend or family member, it’s wise to verify.
If a call seems to be from your bank, hang up and call the number on the back of your bank card directly to confirm the request’s legitimacy.
Limiting personal information shared online and using security tools like two-factor authentication can help reduce your risk, but no method is perfect.
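For readers curious why two-factor authentication helps even when a scammer has a cloned voice and leaked card numbers, here is a minimal, hypothetical sketch of time-based one-time passwords (TOTP) using the open-source pyotp Python library. It illustrates the general technique only; it is not any bank’s actual system.

```python
# Toy illustration of time-based two-factor authentication (TOTP).
# This is a sketch of the general mechanism, not a real bank's code.
import pyotp

# At enrollment, the service and the user's authenticator app share
# this secret once (typically via a QR code).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# The app derives a short-lived six-digit code from the shared secret
# and the current time; the service independently derives the same code.
code = totp.now()
print("Current code:", code)

# A cloned voice or a leaked card number is not enough to pass this
# check: the caller would also need the current code from the device.
print("Verified:", totp.verify(code))  # True within the ~30-second window
```

The point of the sketch is that the one-time code depends on a secret that never appears in data leaks or phone calls, which is why this kind of check blunts voice-cloning attacks even when other personal details are compromised.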
Policy changes, like better international coordination on cybercrime laws and higher penalties for AI-driven fraud, are being discussed but will take time to implement.
Fraud prevention experts note that these attacks are often carried out by organized crime rings using automation. Banks need layered security and the agility to adapt, with the practical goal of being a harder target than their peers.
While the reporter’s attempt to fully ‘hack’ their bank was partly blocked by security protocols, the ease with which the deepfake voice interacted with the system shows the clear and present danger these technologies pose.