How to Recognize AI-Generated Fakes: A Guide for Businesses

Publication date: Apr 26, 2024

Read Time: 6 minutes

In one of our previous blog posts, we explored the growing sophistication and proliferation of deepfakes and their potential to wreak havoc on businesses. Indeed, fake documents, images, videos, and audio now play a significant role in various cybercriminal activities, including phishing, identity theft, reputation attacks, and fraud. However, the good news is that deepfake detection methods are also evolving rapidly, and it’s now possible for businesses to recognize AI-generated fakes using readily available tools and knowledge of peculiarities and inconsistencies typical of synthetic media.

How to Detect Fake Images

Fake images can be used to deceive and manipulate individuals and businesses in a number of different ways. For instance, they can be used to create convincing social media profiles for phishing, spoof brands for identity theft, or conduct disinformation campaigns.

While the quality of AI-generated images has increased dramatically in recent years, there are still some telltale signs that an image has been generated by artificial intelligence, such as:

  • Extra, missing, or misshapen fingers.
  • Unreadable or nonsensical text. 
  • Clothes and accessories that don’t make sense (buttons in strange places or necklaces that appear to float in mid-air).
  • Lack of consistency across multiple images with the same subject. 
  • Distorted background objects and geometry. 

Many AI tools have been trained on AI-generated images to recognize the subtle patterns and imperfections that separate them from real images. One such tool is the Hive AI Detector, a well-rated browser extension that can detect images generated by Midjourney, DALL-E, Stable Diffusion, and other popular models.

Image: An AI-generated headshot created with Canva.

Organizations whose employees are at a heightened risk of encountering deepfake images, such as fake profile photos on LinkedIn, should consider equipping them with a similar tool so they can quickly check whether an image is real instead of relying solely on their own judgment.

Additionally, implementing regular training sessions with practical exercises that test the ability to recognize fakes can be immensely beneficial. Organizations can take advantage of resources like the Detect Fakes website, which provides a series of images that have been manipulated using AI, as well as non-altered images, and asks users to identify which ones are fake. 

How to Recognize Synthetic Videos

While fake videos lag behind images in terms of prevalence due to their complexity, there have already been attacks where they were successfully used. 

One such example is an attack on a finance worker at a multinational firm who was tricked into paying out $25 million to fraudsters using deepfake technology to pose as the company’s chief financial officer in a video conference call.

The telltale signs that apply to images also apply to synthetic videos, since a video is just a series of images. However, there are at least three additional signs to look out for:

  • AI-generated characters may not display emotions or facial expressions that align naturally with the spoken dialogue, as can be clearly seen in a recent deepfake video of Volodymyr Zelenskyy.
  • In synthetic videos, the synchronization between what is being said and the movements of the lips or facial expressions can be off. This lag, no matter how slight, is a red flag.
  • Variations in video quality within the same clip can also indicate portions have been altered or generated.

The main takeaway for employees, however, is that videos can be faked: just because a highly unusual request arrives via a video call doesn't mean it shouldn't be verified through a different channel.

How to Identify Deepfake Audio

Tools like ElevenLabs make it possible to clone someone’s voice by providing just a short audio sample. This technology can be used for various malicious purposes, such as impersonating a CEO or other high-level executive to authorize fraudulent transactions.

One such incident was reported in 2019. A group of fraudsters used voice-generating AI software to mimic the voice of the chief executive of a U.K.-based energy company’s parent company to facilitate an illegal fund transfer of $243,000. After the money had been transferred, it was forwarded to an account in Mexico and then other locations.

While tools to detect deepfake audio exist, such as Pindrop Security, AI or Not, and AI Voice Detector, a recent NPR experiment found that they often don't work well in practice. "Our experiment revealed that the detection software often failed to identify AI-generated clips, or misidentified real voices as AI-generated, or both," writes NPR.

Because AI voice detection can be hit or miss, and because it's impractical or impossible to run these tools during real-time voice calls, the best current strategy for dealing with deepfake audio is to train employees to treat unusual requests with suspicion and verify them via a second channel.

For example, if an employee receives a call from someone claiming to be a high-level executive requesting an urgent wire transfer, they should hang up and call the executive back at a known and verified phone number to confirm the request. 

How to Spot AI-Generated Text

While AI-generated text isn’t inherently more dangerous than text written by humans, it’s still useful to be able to recognize it because of how easily it can be misused for cybercrime. 

Phishers, many of whom are not native English speakers, have embraced AI-generated text as a tool to create more convincing emails and chat messages, so when someone who has no reason to use AI is sending messages that are clearly AI-generated, it’s possible that they may not be who they claim to be. 

To spot AI-generated text, you should keep in mind the following: 

  • AI-generated text often provides responses that seem informative but lack specific details or direct relevance to the query.
  • AI can sometimes repeat the same point in different ways throughout the communication. 
  • Unlike human writers, who might make occasional grammatical errors, AI typically displays flawless grammar.
  • Text generated by AI tends to maintain a high level of politeness and formality. Phrases like “I hope this message finds you well” can appear frequently.
  • Specific transitional phrases like “However,” “Moreover,” “Additionally,” and “It’s important to note” are used unnaturally often by AI.

The best way to become familiar with the characteristics of AI text is to spend some time using AI tools like ChatGPT or Google Gemini. After just a few days of interacting with these tools, their output becomes easily recognizable.

In addition to becoming familiar with AI text through personal experience, there are also AI text detectors available that can help identify machine-generated text. Popular examples include GPTZero, Copyleaks, and Scribbr. However, it's important to use these tools with caution, as they can produce false positives.

Conclusion 

The rise of generative AI has equipped cybercriminals with dangerous tools. To stay ahead of the curve, organizations should become familiar with the output of these tools and upgrade their defenses accordingly, protecting themselves from the cybercriminal activities that exploit synthetic media, including phishing, identity theft, reputation attacks, and fraud.
