
AI-Generated Videos and Deepfakes: How to Spot What’s Real

In the age of artificial intelligence, seeing is no longer believing. AI-generated videos (often called deepfakes) are becoming more realistic, more accessible, and more dangerous. From celebrity impersonations to political misinformation, deepfakes are reshaping how we perceive truth online and putting the security of your personal information at risk.

But how can we tell what’s real and what’s fake?


What Are Deepfakes?

Deepfakes are videos created using AI that manipulate faces, voices, and movements to make it appear as though someone said or did something they never did. These videos often mimic real people with uncanny accuracy.

They are often used for:

  • Entertainment (e.g., parody videos)
  • Fraud (e.g., impersonating CEOs in video calls)
  • Misinformation (e.g., fake political speeches)
  • Identity theft or harassment


How to Spot a Deepfake

While deepfakes are getting harder to detect, there are still signs that can help you identify them:

  1. Unnatural Eye Movement or Blinking
    Early deepfakes often failed to replicate natural blinking patterns. While newer models are better, you might still notice robotic or inconsistent eye movement.
  2. Odd Facial Expressions or Skin Texture
    Look for strange lighting, blurry patches, or skin that looks too smooth or rubbery. Deepfakes sometimes struggle with realistic facial muscle movement.
  3. Lip Sync Issues
    Watch the mouth closely. If the lip movements don’t match the audio perfectly, it could be a sign of manipulation.
  4. Audio Quality
    Deepfake videos may use synthetic voices that sound slightly robotic or lack natural intonation. Background noise may also feel off or disconnected from the scene.
  5. Head and Body Movement
    AI often focuses on the face but struggles with syncing full-body movement. Watch for unnatural head turns or stiff posture.
  6. Context Clues
    Ask yourself: Does the person in the video usually speak this way? Is the source trustworthy? If it’s a shocking or controversial clip, verify it through reputable news outlets.
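For readers curious how automated detectors quantify the first sign above, one widely used measure is the eye aspect ratio (EAR), which drops sharply when an eye closes. The sketch below is a minimal, simplified illustration, not a production detector: it assumes six eye landmark points per frame have already been extracted by a separate face-landmark model (such as dlib or MediaPipe, neither of which is shown here), and the 0.2 threshold is an illustrative value, not a universal constant.

```python
from math import dist

def eye_aspect_ratio(landmarks):
    """Compute the eye aspect ratio (EAR) from six eye landmarks.

    landmarks: list of six (x, y) points ordered p1..p6, where p1 and p4
    are the horizontal eye corners, p2/p3 are upper-lid points, and
    p5/p6 are lower-lid points. EAR falls toward 0 as the eye closes.
    """
    p1, p2, p3, p4, p5, p6 = landmarks
    vertical = dist(p2, p6) + dist(p3, p5)   # lid-to-lid distances
    horizontal = 2 * dist(p1, p4)            # corner-to-corner width
    return vertical / horizontal

def count_blinks(ear_series, threshold=0.2):
    """Count blinks as runs of frames where EAR dips below threshold.

    An unusually low (or suspiciously regular) blink count over a video
    is one signal that analysts have used to flag synthetic footage.
    """
    blinks, closed = 0, False
    for ear in ear_series:
        if ear < threshold and not closed:
            blinks += 1
            closed = True
        elif ear >= threshold:
            closed = False
    return blinks
```

In practice you would run a landmark model on every video frame, feed the resulting EAR values into `count_blinks`, and compare the blink rate against normal human ranges; modern deepfakes reproduce blinking much better than early ones, so this is only one signal among many.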


AI Voice Scams

Criminals have also found ways to weaponize this technology by impersonating voices over the phone. In a typical AI voice scam, a perpetrator uses software to clone someone's voice, usually with the aim of stealing money or personal information. Criminals may use the cloned voice to convince targets that someone they care about is in an urgent or dangerous situation and needs money fast. Alternatively, scammers may contact someone while pretending to be a person who can be trusted with sensitive information, such as a banking representative.


Why It Matters

Deepfakes aren’t just a tech curiosity; they’re a growing cybersecurity threat. They can be used to:

  • Trick people into transferring money
  • Spread false information
  • Damage reputations or incite conflict

Being able to spot and report deepfakes is part of being able to protect your personal financial information online.


Final Tips

  • Verify before sharing: Always check the source of a video before reposting.
  • Stay informed: Follow cybersecurity news and updates.
  • Educate others: Share what you learn with friends and family.


Being in the know is the first step to protecting yourself from cyber fraud. Choice Bank is committed to providing you with up-to-date resources and tips to help you stay informed. Learn more at bankwithchoice.com/cybersecurity.

Photo Credit: VAKSMANV