
Fake voices 'help cyber-crooks steal cash'

Image source: Getty Images. Image caption: Convincing fakes of audio are easier to generate than video spoofs.

A security firm says deepfaked audio is being used to steal millions of pounds.

Symantec said it had seen three cases of seemingly deepfaked audio of different chief executives used to trick senior financial controllers into transferring cash.

Deepfakes use artificial intelligence to create convincing fake audio or video.

The AI system could be trained using the "huge amount" of audio the average chief executive would have innocently made available, Symantec said.

Corporate videos, earnings calls and media appearances, as well as conference keynotes and presentations, would all be useful for fakers looking to build a model of someone's voice, chief technology officer Dr Hugh Thompson said.

"The model can probably be almost perfect," he said.

Image source: Getty Images. Image caption: A deepfake of Facebook boss Mark Zuckerberg was widely shared on the social network.

The fraudsters had also used background noise to mask the least convincing syllables and words.

"Really," said Dr Thompson, "who would not fall for something like that?"

Dr Alexander Adam, a data scientist at AI specialist Faculty, said it would take a substantial investment of time and money to produce good audio fakes.

"Training the models costs thousands of pounds," he said.

"This is because you need a lot of compute power and the human ear is very sensitive to a wide range of frequencies, so getting the model to sound truly realistic takes a lot of time."

Typically, he said, hours of good-quality audio were needed to capture the rhythms and intonation of a target's speech patterns.