
AI deepfakes are rapidly emerging as one of the most deceptive threats in cybersecurity today. Using advanced generative technology, cybercriminals can mimic faces, voices, and even writing styles to trick victims into revealing sensitive credentials. As these synthetic attacks become harder to detect, understanding how to recognize and defend against deepfakes is essential to protecting your business.
The Threat of AI Deepfakes in Cybersecurity
In truth, almost any AI model can be corrupted for malicious purposes. The latest generative AI models convincingly mimic human output, and when they are equipped with advanced capabilities such as text-to-video or text-to-audio, cybercriminals can use them to create synthetic audio and video files, called deepfakes, to commit online fraud.
A trusted face, voice, or both are convincingly replaced or manipulated to get unsuspecting users to divulge critical data. Attackers are getting so good at creating deepfakes with AI that they are almost undetectable – almost.
Which brings us to the dark side of AI… What can cybercriminals do with the data they glean from the AI tools your employees use, and what capabilities does AI give them to better impersonate a legitimate user?
The end goal hasn’t changed since the birth of cyber scams: Thieves want your online passwords, PINs, and credentials to access information systems that contain confidential information.
They will sell stolen identities on the dark web, attempt fraudulent purchases and financial transactions, or hold company data and systems for ransom – whatever opportunity presents itself. AI simply gives criminals the ability to make their scams more convincing.
Common AI Deepfake Tactics Used by Cybercriminals
When a criminal masters a deepfake application like Deepfake Studio or DeepFaceLab to create videos with another person's face and voice, a tool like Faceswap or DeepFaceLive to face-swap or head-swap during live conference calls, or natural language processing (NLP) models to make written messages sound legitimate, the odds of a successful attack increase.
- Phishing and spear phishing. AI’s advanced NLP capabilities can create convincing and targeted email phishing attacks that appear to come from reliable sources. It helps criminals who do not speak English as a first language make their messages sound authentic, and even allows them to translate their threats into multiple languages, widening their net.
- Impersonation. In written communications, attackers can train AI tools to mimic a person’s unique writing style and mannerisms, making it appear as though a trusted person was issuing instructions or giving permission to execute an action, like a financial transfer.
In video, thieves can make themselves appear as someone else, perhaps an executive to trick employees into exposing company secrets on a conference call, or a celebrity to create fake endorsements to influence viewers to purchase products or visit a malicious website. This is the real threat of AI.
How to Recognize AI Deepfakes
Deepfakes are getting harder to spot, but they’re far from perfect. Here are some things to look for to determine if an image or message may be a deception:
- Visual distortion. Today’s deepfake video technologies are pretty good at straight-on views, where the face is easily mapped, but have difficulty maintaining image integrity when the subject turns their head. Look for temporary loss or blurriness of facial features, or things like sections of beards or jewelry disappearing and reappearing.
- Interference. Halo effects and artifacts floating around the speaker’s head, or pixelated/illegible text on walls indicate a fake background. Is there a clock on the wall? Can you see what time it is? Is it moving?
- Transparency. If the subject places an object in front of them or puts a hand over their face, you may see a ghost image revealing the underlying swapped or masked face. Not sure if you believe what you see? Ask the speaker to touch their nose or take off their glasses (if wearing).
- Missing body parts and lip-synching issues. When the imposter speaks, some deepfake software apps cannot render a tongue or show teeth accurately when the subject opens their mouth or moves their lips. There may also be audio-synching inconsistencies, where the speech does not match mouth movements, facial expressions, or hand gestures. Not sure whether these effects are caused by a poor network connection or deepfake software? Ask the speaker to stick out their tongue.
- Perfection. In written deepfake communications you won’t find any typos, inconsistencies, rambling thoughts, run-on sentences, or anything else that suggests a human wrote it. If a message is too perfect, treat it with suspicion – it may be machine-generated.
Protecting Your Business from AI Deepfake Attacks
AI is making it tougher to tell fact from fiction. It’s hard to trust your own eyes and ears today. What can you do to minimize your business’ exposure to threats from AI-powered criminals?
In part one of this article, The Dangers of Entering Data into AI Tools, we discussed how your employees may be using AI and that you should:
- Restrict the AI tools used by your employees to those you trust (e.g., not DeepSeek).
- Educate employees about the types of data that should never be entered into AI models: personal information, passwords and digital credentials.
- Store company intellectual property in password-protected systems and use a password manager like Passpack to control access to those resources.
To those we can now add:
- Train your employees on the “tells” of a deepfake.
- Use two-factor authentication to be sure communications are legitimate and the other party really is who they say they are.
- Train employees to verify any odd request, or anything that doesn’t look quite right, by contacting the person directly through another channel to confirm the message is authentic.
Remember, criminals’ goals haven’t changed, just their weaponization of technology to achieve them. They’re after the same thing: business credentials. Don’t make it easy for them. Use a password & credentials manager to put an extra layer of protection between you and the latest threats, no matter how deceptive.
Trust your instincts. Trust Passpack.
Passpack offers this advice to help protect you from deepfakes. We assure you we never use AI in our communications and will never ask for your passwords. If someone does, that’s a deepfake!
Passpack’s secure password and credentials manager can help protect businesses from the risks of employees using AI tools, deepfakes and other threats by controlling access to critical systems and applications. Try Passpack risk-free. Sign up for a no-obligation 28-day free trial of the Passpack Business Plan.