AI voice scammers are posing as loved ones to steal your money — here’s a foolproof trick to stop attacks

Techsperts have revealed a foolproof way to separate real people from their AI clones.

Here’s how to thwart a real-life attack of the clones.

Artificial intelligence’s light-speed advancement has made people easy prey for increasingly prevalent AI voice scams. Fortunately, techsperts have revealed a foolproof way to differentiate humans from these digital phone clones: ask them for a safe word.

“I like the code word idea because it is simple and, assuming the callers have the clarity of mind to remember to ask, nontrivial to subvert,” Hany Farid, a professor at the University of California, Berkeley, who has researched audio deepfakes, told Scientific American.

“Right now there is no other obvious way to know that the person you are talking to is who they say they are,” said Hany Farid, an audio deepfake expert at the University of California, Berkeley. chee siong teh – stock.adobe.com

Thwarting AI has become paramount given the plethora of AI phone scams, in which cybercriminals employ cheap AI tools to parrot family members’ voices and dupe people into providing bank account numbers and other valuable information.

The voices are often reconstructed from the briefest of sound bites harvested from the victim’s social media videos.

The most notorious of these AI-mpersonation schemes pairs “caller-ID spoofing” with a fake kidnapping: phone scammers claim they’ve taken the recipient’s relative hostage and will harm them if they aren’t paid a specified amount of money.

This cybernetic catfishing software has gotten so state-of-the-art that the AI voices are often indistinguishable from those of loved ones.

“I never doubted for one second it was her,” Arizona mom Jennifer DeStefano said, recalling a bone-chilling incident in which cybercrooks cloned her daughter’s voice to demand a $1 million ransom.

Circumventing these familial infiltrators requires not bot-sniffing dogs, but flipping the script on the computer and asking it for the password.

This comes amid an increase in AI clone scams, in which criminals use the tech to mimic family members asking for money over the phone. Getty Images/iStockphoto

The experts at Scientific American suggest inventing a special safe word or private phrase that only your family members would know and then sharing it with each other in person.

If they call asking for money in an emergency, the person on the other end can request this offline password, thereby allowing them to see through potential lies.

“Right now there is no other obvious way to know that the person you are talking to is who they say they are,” said Farid, who advises giving family members periodic “safe word” pop quizzes so they won’t forget it.

Who knew that advanced AI could be foiled by the equivalent of your older brother blocking the hallway and demanding the password?

These AI clones can mimic a family member’s voice to a tee by using sound bites from their social media videos. andreusK – stock.adobe.com

Experts analogized the method to the safe words parents teach their kids to thwart kidnappers masquerading as friends picking them up from school.

People can even use their childhood code words to catch AI-mposters, they claim.

Of course, safe words aren’t the only way to detect a clone in sheep’s clothing.

Other red flags include unexpected calls demanding financial action; intrusive, artificial-sounding background noise that seems to be looped; and inconsistencies in the conversation.

“Voice-cloning technology often struggles to create coherent and contextually accurate conversations,” digital payment solutions firm Takepayments warns. “If the ‘person’ on the other end contradicts themselves, gives information that doesn’t align with what you know, or seems to dodge direct questions, it’s a reason for concern.”

Cyberfraudsters will often request to be paid in cryptocurrency as “it’s impossible to trace the identity of who money is being sent to,” per the site.

“Any requests for funds via common digital currencies, like Bitcoin or Ethereum, should be treated as highly suspicious,” they write.