Phishing has long ranked among the most common cyber attacks. Increasingly aware of its potential consequences, many organizations are now equipping themselves with anti-phishing solutions and training their employees to recognize phishing emails.
Unfortunately, this might not be enough because phishers have new not-so-secret weapons: artificial intelligence and machine learning. They use these sophisticated technologies to create deepfake-enhanced phishing attacks that are much harder to detect than their traditional counterparts.
To help you prepare for your first encounter with deepfake technology, we explain what it is, how it’s being used, and what you can do to protect your organization.
About Deepfake-Enhanced Phishing Attacks
In March 2021, the FBI released a Private Industry Notification (PIN) report to warn about the rising threat of synthetic (computer-generated) content.
“Malicious actors almost certainly will leverage synthetic content for cyber and foreign influence operations in the next 12-18 months,” the report said. “Foreign actors are currently using synthetic content in their influence campaigns, and the FBI anticipates it will be increasingly used by foreign and criminal cyber actors for spear-phishing and social engineering in an evolution of cyber operational tradecraft.”
Just like even relatively unskilled cybercriminals can now purchase distributed denial-of-service (DDoS) or ransomware attacks as a service on the dark web, it’s becoming easier and easier for them to create realistic images, videos, and audio using artificial intelligence and machine learning.
Chances are you’ve seen the 2018 video of Barack Obama commenting on the movie Black Panther and insulting Donald Trump. That video was, of course, a deepfake.
Since then, the quality of synthetic content has only increased, and so has the number of successful phishing attacks that used deepfakes to trick unsuspecting victims.
Deepfake Phishing Attacks Are a Real Threat
Despite not being widely recognized as a serious cybersecurity threat, deepfake phishing attacks have already caused a lot of damage, and we don’t have to look far for examples that illustrate how damaging and difficult to detect they can be.
One particularly costly deepfake phishing attack happened in 2020. A bank manager in Hong Kong received a call from what he believed was a familiar company director. So, when the caller requested that a $35 million bank transfer be authorized, he went right ahead. In reality, the request came from a phisher who had used so-called deep voice technology to clone the director’s speech.
Matthew Canham, a University of Central Florida research professor and cybersecurity consultant, provided another example at the 2021 Black Hat security conference to illustrate that phishers also target what they perceive to be small but easy prey.
“My friend Dolores got a series of text messages from her boss to buy gift cards for 17 employees for the upcoming holiday party—and not to tell anyone,” Canham said. “Dolores bought the gift cards, the party came, and the boss didn’t know anything about it.”
The boss didn’t remember asking his employee to buy any gift cards because that conversation never happened. In reality, Dolores became a victim of an SMS-based deepfake attack involving a spoofed phone number and a chatbot designed to imitate the boss’s writing patterns.
Improve Your Ability to Spot Deepfakes and Other Synthetic Content
What the above-described and other similar deepfake phishing attacks illustrate is that anyone—from executives to small business employees—can become a target. That’s why organizations should train their employees for the first encounter with this cybersecurity threat.
While increasingly convincing, deepfake phishing attacks are still not perfect, and certain telltale signs should raise suspicion.
For example, the tendency of deepfake speech samples to sound slightly robotic is what foiled a phishing attempt on a tech company in 2020. An employee noticed that an audio message from someone claiming to be his company’s CEO didn’t sound entirely natural, so he alerted the legal department, which promptly confirmed his suspicion.
In addition to deepfake-specific cybersecurity awareness training, organizations should also put in place policies that prevent a single fake request from kicking off a cascade of undesirable events. In practice, this can be as simple as requiring employees to slow down and verify every sensitive request through a different communication channel, no matter how urgent it seems.
In the future, organizations large and small will likely have to use artificial intelligence tools, such as the deepfake detection tool Microsoft unveiled in 2020, to recognize synthetic content created by artificial intelligence—just like they currently use spam filters to keep spam messages out of employees’ inboxes.
Conclusion on Deepfake Attacks
The era of artificial intelligence and machine learning has arrived, and with it a new generation of phishing attacks. Instead of hoping that deepfake-enhanced phishing attacks won’t target your organization, you should prepare for the worst by educating your employees and putting anti-phishing policies in place.
We at OSIbeyond would be happy to guide you along the way. Contact us for more information.