
If You Can Spot Fake News, You Can Spot Fake Video: A Practical Detection Playbook


There is a comforting narrative that deepfakes are a new, unprecedented threat that requires brand-new skills nobody has. That narrative is mostly wrong. The mental moves you already use to sanity-check a suspicious headline — where is this from, who benefits, does it match everything else I know — are the same moves that catch most fake video in 2026.

They are also, uncomfortably, the same moves that will stop working when synthetic media gets one more step better. Both of these things are true. This post walks through the heuristics that still work, and is honest about when they won’t.

Start with the question you already know how to ask

When someone forwards you a suspicious news story, you almost certainly do some version of this without thinking about it:

  • Where did it originate? Is that source known to you?
  • Is anyone else I trust reporting it? If not, why not?
  • Does it confirm something I already wanted to believe?
  • Is someone asking me to do something specific — donate, click, share, panic?

The same questions apply to video. The video is a delivery mechanism for a claim. Interrogate the claim, then interrogate the delivery.

Layer 1: provenance

Provenance is the cheapest and most useful filter. The question is simply: where did this file come from?

  • Original source. The video was posted by the person who appears in it, on their verified account, within a plausible timeframe. Strong.
  • Secondary reliable source. A news organisation with a real editorial chain attached the video to a story and named the source. Moderate.
  • Forwarded from a group chat by your cousin. Unknown origin. This is the bulk of malicious video in 2026. Weak to nothing.

If provenance is unknown, everything else is less convincing than people feel it is. A video that looks real and sounds real tells you nothing about whether the event depicted happened.
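
To make the ranking concrete, here is the same hierarchy as a lookup table. The labels and weights are this post's invented taxonomy, not any standard:

```python
# The provenance tiers above as a coarse trust weighting.
# Labels and weights are illustrative, not a standard.
PROVENANCE_TRUST = {
    "original_verified": 2,   # posted by the subject, verified account, plausible timing
    "secondary_reliable": 1,  # newsroom with an editorial chain, source named
    "unknown_forward": 0,     # group-chat forward, origin unknown
}

def provenance_weight(source_type: str) -> int:
    """Anything we can't classify gets the same weight as an unknown forward."""
    return PROVENANCE_TRUST.get(source_type, 0)
```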

Layer 2: corroboration

Real events leave multiple footprints. A public figure saying something important will usually be covered by multiple outlets, commented on by peers, and discussed in secondary channels within hours.

  • Reverse-search a frame (Google Images, Yandex, TinEye). Does the footage appear in any prior context: a different event, a different date, a different person? (A short frame-grabbing sketch follows at the end of this section.)
  • Search the purported quote or claim as text. Is it reported anywhere that is not downstream of this specific video?
  • Check official channels. If a minister allegedly said something, silence from their press office is itself informative.

Corroboration is slow and doesn’t always resolve cleanly, but it is the most durable heuristic on this list because it doesn’t depend on fooling your eyes.
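
Reverse-searching a frame assumes you have a frame to search. Here is a minimal sketch using OpenCV (assuming `opencv-python` is installed; the file name is illustrative) that grabs a few evenly spaced frames you can upload by hand to Google Images, Yandex, or TinEye:

```python
# Extract a handful of evenly spaced frames from a video for reverse image search.
# Assumes `pip install opencv-python`; the input path is illustrative.
import cv2

def extract_frames(path: str, count: int = 5) -> list[str]:
    cap = cv2.VideoCapture(path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    saved = []
    for i in range(count):
        # Seek to an evenly spaced position and read a single frame.
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(total * i / count))
        ok, frame = cap.read()
        if not ok:
            continue
        out = f"frame_{i}.jpg"
        cv2.imwrite(out, frame)
        saved.append(out)
    cap.release()
    return saved

print(extract_frames("suspicious_clip.mp4"))
```

None of the reverse-search engines require an API key for a one-off manual check, so saving frames to disk is usually all the tooling you need.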

Layer 3: audio-visual tells (deprecated but still useful in 2026)

These are the tells the early deepfake detection guides leaned on. They work less and less as models improve. In 2026 they catch amateurs, not the state of the art.

  • Mouth shape doesn’t quite match sound, especially on bilabial sounds (p, b, m), where the lips have to close fully.
  • Hair edges flicker or dissolve at the temples.
  • Rigid, repetitive head motion or unnaturally still torso.
  • Earrings, glasses, or earbuds that appear and disappear between frames.
  • Breathing patterns that don’t match speech cadence.
  • Eye gaze that is subtly dead or over-exaggerated.
  • Audio with no room tone, or with plosives that hit oddly.

Use these, but do not rely on them. A convincing 2026 deepfake will pass most of them. A 2027 one will pass all of them.

Layer 4: behavioural context

This is the layer that doesn’t get worse as models improve, because it’s about the world, not the pixels.

  • Would this person plausibly say this, on this platform, on this day? Public figures are self-consistent. A sudden dramatic break from everything else they have said is suspicious.
  • Does the ask match the person? Politicians don’t typically call your accountant asking for wire transfers. CEOs don’t typically DM subordinates requesting iTunes gift cards.
  • Is urgency being manufactured? “You must act in the next 20 minutes” is almost never real in legitimate communication.
  • Is verification being discouraged? “Don’t tell anyone else yet” is a sign you should tell someone else.

The behavioural layer is where most successful deepfake fraud actually fails. The video is convincing; the scenario is not.

A one-minute detection routine

When you have sixty seconds:

  1. Where did this come from? Original poster, reliable secondary, or unknown? (20s)
  2. Is anyone else reporting it? Quick search of the claim, not the video. (20s)
  3. Does the behaviour match the person and platform? (10s)
  4. Is someone pushing me toward a specific action? (10s)

If three of the four checks come back badly (unknown origin, no corroboration, behaviour that doesn’t fit, a pressured ask), treat the video as unverified regardless of how real it looks. The point of the routine is to decouple your belief from the visual quality of the file, because visual quality is no longer evidence.
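
If it helps to keep yourself honest, the routine compresses to a tiny triage score. This is a sketch, not a product; the field names and the three-flag threshold are just this post's rule of thumb:

```python
# The sixty-second routine as a triage score. Field names and the
# three-flag threshold are this post's rule of thumb, nothing standard.
from dataclasses import dataclass

@dataclass
class VideoTriage:
    known_origin: bool            # posted by the subject or a reliable secondary source?
    independently_reported: bool  # does the claim exist outside this one video?
    behaviour_fits: bool          # does it match the person, platform, and day?
    pressured_ask: bool           # is a specific urgent action being demanded?

    def verdict(self) -> str:
        red_flags = [
            not self.known_origin,
            not self.independently_reported,
            not self.behaviour_fits,
            self.pressured_ask,
        ]
        return "treat as unverified" if sum(red_flags) >= 3 else "keep checking"

clip = VideoTriage(known_origin=False, independently_reported=False,
                   behaviour_fits=True, pressured_ask=True)
print(clip.verdict())  # treat as unverified
```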

Where the playbook runs out

Three trends are narrowing the window where perceptual heuristics help.

  • Model quality. Audio-visual tells that were reliable in 2023 already fail against the state of the art in 2026; by 2028 they will fail against mid-budget productions too.
  • Real-time generation. Live-call deepfakes already exist and are improving. “Get on a video call to verify” stopped being a clean control about eighteen months ago.
  • Voice quality. Voice cloning is effectively solved for consumer-grade fraud. If you are using voice alone to verify anyone, the verification is theatre.

When the surface cannot be trusted, you have two options. You can invest in bureaucratic controls — callbacks on pre-recorded numbers, code words, multi-person approvals for anything that matters — which we recommended in our post on modern identity stacks. Or you can shift to provenance-based trust — only believe media that carries a verifiable signature from a source you already trust.

Both are good answers. Neither is “look harder at the pixels.”
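
For a flavour of what the bureaucratic option looks like as logic rather than policy, here is a hypothetical approval gate. Every name, number, and threshold below is invented for illustration; the design point is that the callback number comes from your own records, never from the request itself:

```python
# Hypothetical approval gate for high-value requests. Names, numbers,
# and the threshold are invented for illustration.
CALLBACK_DIRECTORY = {"cfo": "+1-555-0100", "ceo": "+1-555-0101"}
APPROVAL_THRESHOLD_USD = 10_000

def release_payment(amount_usd: int, approvers: set[str], callback_confirmed: bool) -> bool:
    """Require two distinct approvers above the threshold, plus a callback.

    The callback must go to a number from CALLBACK_DIRECTORY, never to a
    number supplied in the request; that is the whole point of the control.
    """
    if amount_usd >= APPROVAL_THRESHOLD_USD and len(approvers) < 2:
        return False
    return callback_confirmed

# One approver, however convincing the video call, is not enough.
assert release_payment(50_000, {"cfo"}, callback_confirmed=True) is False
assert release_payment(50_000, {"cfo", "ceo"}, callback_confirmed=True) is True
```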

The durable fix

The durable fix is that media starts carrying verifiable provenance at the source. C2PA and Content Credentials — signed, tamper-evident metadata attached at capture or publication — are the current serious attempt. A video with a valid C2PA manifest linking it to a known publisher is evidence in a way that an unsigned video in a group chat is not.
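
If you want to look for Content Credentials yourself, the C2PA project publishes an open-source command-line tool, c2patool, which reads a file's manifest store and prints it as JSON. A minimal wrapper, assuming the tool is installed and on your PATH (the file name is illustrative):

```python
# Sketch: shell out to the open-source c2patool CLI to read a C2PA manifest.
# Assumes c2patool is on PATH; it prints the manifest store as JSON when one
# is present, and exits non-zero when the file carries no credentials.
import json
import subprocess

def read_manifest(path: str) -> dict | None:
    result = subprocess.run(["c2patool", path], capture_output=True, text=True)
    if result.returncode != 0:
        return None  # no manifest, or the tool could not validate the file
    return json.loads(result.stdout)

manifest = read_manifest("video.mp4")
print("signed" if manifest else "no Content Credentials found")
```

Remember the asymmetry: a missing manifest proves nothing today, because almost all legitimate video is still unsigned; it is a present, valid manifest that adds evidence.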

That transition is happening, but slowly. We’ll write about the brand and media-organisation side of it in the next post. For now, the honest guidance is: train the old muscles, apply them more often than you think you need to, and start shifting your trust model toward provenance before the perceptual heuristics quietly stop working on you.