As AI technology advances, the emergence of deepfakes poses an ever-evolving threat. These manipulated images, videos and audio use artificial intelligence to create convincing but false representations of people and events.
Of particular interest is voice spoofing, also known as voice cloning, which uses AI to create a realistic-sounding recording of someone’s voice. Fraudsters have used voice deepfakes to impersonate familiar voices, such as a family member’s or a bank representative’s, to trick consumers into handing over money or providing sensitive information.
In a recent incident, scammers tricked some grandparents into thinking their grandson was in jail and needed money for bail, using a replica of his voice to beg for help.
“We got sucked in,” the grandmother told The Washington Post. “We were convinced we were talking to Brandon.”
How do you protect yourself from such sophisticated trickery?
“Consumers should be wary of cold calls saying a loved one is in danger or messages asking for personal information, especially when it comes to financial transactions,” said Vijay Balasubramaniyan, co-founder and CEO of Pindrop, a voice verification and security company that uses artificial intelligence to protect businesses and consumers from fraud and abuse.
He offers these five signals that the voice on the other end of the line may be AI-generated.
Look for long pauses and signs of a distorted voice
Deepfakes still require the attacker to type the sentences that will be converted into the target’s voice. This takes time and results in long breaks. These pauses are easy to overlook, especially when the request on the other end of the line is urgent and laden with emotional manipulation.
“But these long pauses are telltale signs of a deepfake system used to synthesize speech,” says Balasubramaniyan.
Consumers also need to listen carefully to the voice at the other end of the line. If the voice sounds artificial or distorted in any way, it could be a sign of a deepfake. They should also be wary of unusual speech patterns or unfamiliar accents.
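To make the "long pauses" signal concrete, here is a minimal sketch of how software might flag unusually long silent stretches in a recorded call. The function name, frame size, and thresholds are all illustrative assumptions, not values from Pindrop or any real detection product; production liveness systems use far more sophisticated analysis.

```python
import numpy as np

def find_long_pauses(samples, sample_rate, frame_ms=50,
                     silence_rms=0.01, min_pause_s=2.0):
    """Return (start_time, duration) pairs for silent stretches.

    A long pause can occur while an attacker types the next sentence
    into a text-to-speech system. Thresholds here are illustrative.
    """
    frame_len = int(sample_rate * frame_ms / 1000)
    n_frames = len(samples) // frame_len
    pauses, run_start = [], None
    for i in range(n_frames):
        frame = samples[i * frame_len:(i + 1) * frame_len]
        # Root-mean-square energy of the frame; near zero means silence.
        rms = np.sqrt(np.mean(frame.astype(float) ** 2))
        if rms < silence_rms:
            if run_start is None:
                run_start = i  # silence begins
        elif run_start is not None:
            duration = (i - run_start) * frame_ms / 1000
            if duration >= min_pause_s:
                pauses.append((run_start * frame_ms / 1000, duration))
            run_start = None
    # Handle silence that runs to the end of the recording.
    if run_start is not None:
        duration = (n_frames - run_start) * frame_ms / 1000
        if duration >= min_pause_s:
            pauses.append((run_start * frame_ms / 1000, duration))
    return pauses
```

For example, a signal made of one second of tone, three seconds of silence, and another second of tone would yield a single flagged pause of roughly three seconds.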
Be skeptical of unexpected or anomalous requests
If you receive a call or message that seems out of character for the person or organization it claims to come from, it could be a sign of a deepfake attack. Especially if the caller uses emotional manipulation and high-pressure tactics to push you into helping, hang up and call the contact back yourself using a known phone number.
Check the caller’s identity
Consumers should ask the caller to provide identifying details, or verify the caller’s identity through a separate channel, such as an official website or email address. This can help confirm that the caller is who they claim to be and reduce the risk of fraud.
Stay up to date with the latest deepfake technology
Consumers need to stay abreast of the latest advances in voice deepfake technology and how fraudsters use it to commit scams. By staying informed, you can better protect yourself against potential threats. The FTC lists the most common phone scams on its website.
Invest in liveness detection
Liveness detection is a technique for catching an attempted spoof by determining whether the source of a biometric sample is a live human or a fake. Companies such as Pindrop offer this technology to help businesses detect whether their employees are talking to a real human or a machine masquerading as one.
“Consumers should also make sure they are doing business with companies that are aware of this risk and have taken steps to protect their assets with these countermeasures,” says Balasubramaniyan.