AI Voice Cloning Fuels Surge in Deepfake Voicemail Scams Targeting Executives

Cybercriminals are increasingly turning to AI-generated voice cloning to carry out a new wave of deepfake voicemail scams, with several companies already defrauded by impersonations of high-profile business executives in sophisticated campaigns across the UK, U.S., and beyond.

According to a recent report by cybersecurity firm Proofpoint, attackers are using synthetic voice technology to mimic company leaders, leaving convincing voicemails that instruct employees to urgently transfer funds or share sensitive information. In some cases, the cloned voices have even matched speech patterns, accents, and emotional tone, adding an extra layer of deception.

“This isn’t the future of cybercrime — it’s happening right now,” said Sherrod DeGrippo, Director of Threat Intelligence at Proofpoint. “The attackers are leveraging publicly available audio, like earnings calls or podcasts, to create near-perfect voice replicas.”

How the Scam Works

In most incidents reported so far, attackers first gather voice samples of company executives, often from public video interviews, podcasts, or social media posts. They then clone the voice using commercial AI voice-synthesis services such as ElevenLabs or Resemble AI, or open-source reimplementations of research models like Microsoft’s VALL-E.

The deepfake message is delivered via voicemail or even live phone calls, urging employees to:

  • Approve wire transfers
  • Share login credentials
  • Bypass normal financial protocols

One UK-based fintech firm reportedly lost over £450,000 after an accounts manager received a call from someone they believed to be the company’s CFO, urgently requesting an emergency transfer.

Incidents and Investigations

Europol’s European Cybercrime Centre (EC3) has launched an investigation into cross-border incidents tied to AI-voice scams, as similar cases have been reported in France, Germany, and the Netherlands. In the U.S., the FBI issued a private industry alert (PIA) in late June 2025 warning companies to treat voicemail or voice-only requests with heightened scrutiny.

“Even with multi-factor authentication and encrypted messaging, all it takes is one convincing voice message to break protocol,” said FBI Cyber Division spokesperson Jenna Rhodes.

Expert Warnings

Security experts warn that deepfake audio is now cheap, fast, and accessible, with some tools requiring as little as 30 seconds of clear speech to generate a usable clone. This has made voice-based social engineering far more dangerous than traditional phishing emails.

“Deepfake audio removes the human instinct to question,” said Dr. Noura Alkhatib, AI Ethics Researcher at King’s College London. “We’re wired to trust voices, especially those we know — which makes this tactic chillingly effective.”

Corporate Response & Mitigation

Businesses are being urged to:

  • Implement multi-layered approval processes for financial transactions
  • Train employees to verify all unusual voice requests via a secondary channel (one such callback workflow is sketched after this list)
  • Deploy AI-based audio detection tools to identify deepfakes
  • Limit the public availability of executive voice data
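
To make the secondary-channel rule concrete, here is a minimal sketch of how it could be encoded in an approvals workflow. Everything in it is illustrative: the directory, thresholds, and the injected place_call function are assumptions for the example, not a description of any vendor's product cited above.

```python
from dataclasses import dataclass

@dataclass
class VoiceRequest:
    claimed_requester: str   # who the caller says they are
    action: str              # e.g. "wire_transfer"
    amount_gbp: float
    inbound_number: str      # number the call arrived from

# Directory of contact numbers kept on file, maintained out-of-band (assumed).
DIRECTORY = {"cfo@example.com": "+44 20 7946 0000"}

HIGH_RISK_ACTIONS = {"wire_transfer", "credential_reset"}

def requires_callback(req: VoiceRequest) -> bool:
    """High-risk or high-value voice requests always need a second channel."""
    return req.action in HIGH_RISK_ACTIONS or req.amount_gbp > 10_000

def verify_via_callback(req: VoiceRequest, place_call) -> bool:
    """Never trust the inbound call: dial the number on file instead.

    place_call is an injected function (hypothetical) that rings the
    directory number and returns True only if the real person confirms.
    """
    on_file = DIRECTORY.get(req.claimed_requester)
    if on_file is None or on_file == req.inbound_number:
        # Unknown requester, or the inbound call claims the number on file.
        return False
    return place_call(on_file, req)

if __name__ == "__main__":
    req = VoiceRequest("cfo@example.com", "wire_transfer", 450_000, "+44 7700 900123")
    if requires_callback(req):
        # Stub callback that never confirms, to show the safe default.
        approved = verify_via_callback(req, place_call=lambda num, r: False)
        print("approved" if approved else "blocked pending callback")
```

The key design choice is that verification never reuses the inbound call: the system dials a number it already trusts, so a spoofed caller ID or a cloned voice on the original line proves nothing.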

Some firms have started using “voice passwords” or rotating code phrases known only to internal staff, a throwback to espionage-era authentication tactics; one way such a phrase could be derived is sketched below.
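
As a minimal sketch, a code phrase can be rotated daily from a shared secret, TOTP-style, so both parties compute it locally and a recording of yesterday’s call cannot be replayed today. The secret, word list, and rotation period below are all illustrative assumptions, not any firm’s actual scheme.

```python
import hashlib
import hmac
from datetime import date, datetime, timezone

# Shared secret distributed out-of-band (assumed); never spoken on calls.
SECRET = b"rotate-me-out-of-band"

# Small illustrative word list; a real deployment would use a larger one.
WORDS = ["granite", "lantern", "cobalt", "harbor", "juniper",
         "meadow", "orchid", "quartz", "saffron", "willow"]

def daily_code_phrase(day: date, words_needed: int = 2) -> str:
    """Derive the day's challenge phrase from the shared secret.

    Both parties compute the same phrase independently, so the caller
    must already hold the secret to answer the challenge correctly.
    """
    digest = hmac.new(SECRET, day.isoformat().encode(), hashlib.sha256).digest()
    picks = [WORDS[digest[i] % len(WORDS)] for i in range(words_needed)]
    return "-".join(picks)

if __name__ == "__main__":
    today = datetime.now(timezone.utc).date()
    print("Today's challenge phrase:", daily_code_phrase(today))
```

Because the phrase changes every day and derives from a secret that never crosses the phone line, an attacker with a perfect voice clone still fails the challenge.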

Sources:

  • Proofpoint Threat Research Report (June 2025) – proofpoint.com
  • FBI Private Industry Alert (PIA-2025-0625) – fbi.gov
  • Europol EC3 Media Release (July 2025) – europol.europa.eu
  • King’s College London AI Ethics Lab Research Brief (July 2025)
  • BBC News Technology Desk – “AI Voice Fraud Hits UK Firms” (July 3, 2025) – bbc.com/news/technology
  • The Guardian – “Rise in AI Voice Cloning Scams Prompts New Warning from UK Cyber Watchdog” (July 4, 2025) – theguardian.com

Bottom Line:

Voice cloning scams powered by AI are no longer theoretical. With little technical skill and cheap, widely available AI tools, cybercriminals are successfully exploiting one of our most trusted human instincts: the sound of a familiar voice. For businesses and employees alike, the era of “don’t trust, verify” has officially arrived.