Deepfake: we can no longer trust the voice

AI-powered voice simulation enables deepfake attacks with potentially devastating consequences, as one company has learned the hard way.

How far cybercrime can go with Deepfake Attacks

First came viruses, and they took away the security of our PCs. Then came worms, and they took away the security of other people's email. Then came phishing, and it took away trust in other people's identities. Finally came deepfake attacks, and they took away whatever security remained. If we can no longer trust the voice, then a large share of everyday human interaction is called into question, because that kind of exchange has always been considered reasonably safe.

The price was paid by a company with offices in the United Kingdom and Germany, where a single fake phone call cost $243,000. This goes far beyond a prank call or the skill of an impersonator: the victim fell to a fake orchestrated with artificial intelligence. In short, the new frontier of cybercrime has arrived.

Deepfake: this is how the attack happened

A call came in to the English branch from what appeared to be the group's chief executive officer: speaking English with a German accent, in a voice that perfectly matched the CEO's timbre, the caller ordered a money transfer to a Hungarian supplier, complete with details of the procedures that would follow. The person who received the call admitted to having had not the slightest doubt about the authenticity of the voice or the identity of the caller, and so carried out the instructions, completing the payment immediately.

Doubts arose, however, when a further call requested another payment of similar size: at that point the head of the English branch halted the procedure and questioned what was going on. Hence the discovery of the fraud, the report to the authorities, and the start of the investigation.

The incident, which occurred in March at a company that has remained anonymous, appears to be an example, and probably the vanguard, of a new mode of attack using AI: with deepfakes, attackers could sway capital, votes, strategic decisions, diplomacy and so on. Identity verification therefore becomes crucial, because thanks to AI it is no longer safe to assume that a computer cannot perfectly imitate another person's voice, simulating a conversation and thereby carrying out a form of phishing that runs far deeper, and is far more dangerous.

Jiya Saini is a Journalist and Writer at Revyuh.com.