Deepfake detection drills
Audience: Survivors, support workers, social workers, advocates, and concerned allies
Duration: 90 minutes (flexible depending on discussion time)
Tone: Informative, engaging, empowering — no fearmongering, no jargon
Materials needed:
Projector or large screen
Laptops/tablets (optional, for hands-on work)
Printed checklists
Example videos (deepfake and real)
Flipchart or whiteboard for discussion
Workshop goals:
By the end of the session, participants will:
Understand what deepfakes are and how they can be misused in abuse contexts
Learn how to spot common warning signs in fake videos and audio
Practise identifying real vs fake content through guided drills
Know what to do if a deepfake is used to harass, threaten, or discredit someone
Leave with a checklist and confidence to question suspicious media
Session outline
1. What on earth is a deepfake? (15 mins)
Facilitator brief: Explain the concept plainly, using everyday terms.
Key points to cover:
A deepfake is a video, photo or audio clip that uses artificial intelligence to make someone appear to say or do something they never actually did.
They’re made by feeding an AI system lots of images, audio, or video of a person, then using it to generate new material that looks and sounds just like them.
Not all deepfakes are dangerous — some are silly or artistic. But in abusive relationships or stalking situations, they can be used maliciously.
Survivors may face fake “revenge porn,” false confessions, or doctored messages that are meant to discredit or intimidate.
Optional analogy:
“It’s like someone taking a puppet, making it look like you, and then putting words in its mouth. Only this puppet can be posted online and passed off as the real thing.”
2. Common signs of deepfakes (15 mins)
Activity: Group brainstorm. Ask: “What might look or sound wrong in a fake video?” Capture answers on the flipchart, then provide the checklist.
Deepfake warning signs checklist:
Mismatched lip-syncing – The lips don’t quite match the words.
Blinking – Either too much or not at all. Some deepfakes still get this wrong.
Lighting inconsistencies – The lighting on the face doesn’t match the background.
Strange eye movements – Staring, twitching, or looking dead behind the eyes.
Wobbly or melted ears/jewellery – AI often struggles with fine detail such as earrings, hairlines, and the edges where a face meets the background.
Flat voice – Audio deepfakes often lack normal speech patterns or emotion.
Repetitive gestures – The same eyebrow lift or blink over and over.
Print and distribute: small laminated cards of the checklist for participants to keep.
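If facilitators want to prepare freeze-frames from the example clips (stills make mismatched lip-sync, odd blinking, and melted edges much easier to point out), the sketch below is one way to produce them. It is a minimal sketch assuming Python with the opencv-python package installed; the file name and output folder are placeholders, and a tool such as ffmpeg would do the same job.

```python
# Save roughly one still per second from a clip so participants can
# study lip-sync, blinking, and edge artefacts frame by frame.
# Assumes: pip install opencv-python. "example_clip.mp4" is a placeholder.
import os
import cv2

VIDEO_PATH = "example_clip.mp4"  # placeholder: one of your drill clips
OUT_DIR = "stills"               # placeholder: folder for the saved stills

os.makedirs(OUT_DIR, exist_ok=True)
cap = cv2.VideoCapture(VIDEO_PATH)
fps = cap.get(cv2.CAP_PROP_FPS)
step = max(int(fps), 1)          # if the frame rate is unknown, save every frame

frame_index = 0
while True:
    ok, frame = cap.read()
    if not ok:                   # end of the clip (or an unreadable file)
        break
    if frame_index % step == 0:
        name = f"still_{frame_index:05d}.jpg"
        cv2.imwrite(os.path.join(OUT_DIR, name), frame)
    frame_index += 1
cap.release()
```

Stepping through the stills with the group makes it easier to ask which checklist items they can actually spot.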
3. Drill 1 – Spot the fake (20 mins)
Setup: Show 5 short video clips (30–60 seconds each), a mix of real and faked. Ask participants to guess: “Real or deepfake?” Take a vote after each one, then reveal the answer and explain.
Discussion prompts:
“What gave it away?”
“If you saw that clip online, would you have believed it?”
“Could this be dangerous in the wrong hands?”
Facilitator note: Include one obviously fake clip and one very subtle one to illustrate the range.
4. Drill 2 – What if this was used against you? (20 mins)
Scenario roleplay exercise: Break into small groups. Hand each group a different scenario card. Example scenarios:
A fake voice message claiming you threatened someone
A video showing “you” shoplifting
An explicit image where your face has been pasted onto someone else’s body
A manipulated Zoom call recording
Task:
Discuss: How would this affect you if it were real?
Then ask: How could you tell it isn’t real?
Finally: What steps would you take, and who would you speak to?
Groups then briefly share their responses.
5. What to do if you’re targeted (10 mins)
Practical action list:
Don’t panic — many people and platforms now know deepfakes exist
Save a copy (don’t just report and delete); a simple way to fingerprint the saved file is sketched at the end of this section
Document: date, time, where it appeared, and who might have seen it
Report to the platform — most social media sites now ban synthetic abusive media
Seek legal advice — it may count as harassment or defamation
Speak to a digital safety support worker — especially before confronting the person responsible
Note: Mention emerging legal protections for synthetic media, for example that a growing number of jurisdictions now treat sharing intimate deepfakes without consent as a criminal offence.
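For support workers who help preserve evidence, the sketch below shows one way to record a tamper-evident fingerprint of a saved file together with the date and time. It is a minimal sketch using only the Python standard library; the file name is a placeholder.

```python
# Record a SHA-256 fingerprint of a saved clip plus a timestamp.
# If the file is later altered, the fingerprint will no longer match,
# which helps show that the documented copy is unchanged.
# "saved_clip.mp4" is a placeholder for the file you preserved.
import hashlib
from datetime import datetime, timezone

FILE_PATH = "saved_clip.mp4"  # placeholder: the saved copy

sha256 = hashlib.sha256()
with open(FILE_PATH, "rb") as f:
    for chunk in iter(lambda: f.read(8192), b""):  # read the file in pieces
        sha256.update(chunk)

print("File:      ", FILE_PATH)
print("SHA-256:   ", sha256.hexdigest())
print("Documented:", datetime.now(timezone.utc).isoformat())
```

Keep the printed output with the notes on date, time, where the content appeared, and who might have seen it.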
6. Q&A and myth-busting (10 mins)¶
Invite any remaining questions or confusions. Suggested myths to bust:
“AI can perfectly fake anything now” – It’s improving, but still flawed
“You need to be famous to be deepfaked” – Anyone with enough photos online could be a target
“If it looks real, it must be real” – That’s exactly what they want you to believe
Optional add-ons
Tech demo: If appropriate and safe, show how easy it is to generate a basic voice clone or face-swap using public tools — then explain how this is being misused.
Printable takeaway sheet: “If you think you’ve seen a deepfake…” with contact points, checklist, and support services.
Follow-up session: “How to report synthetic media safely,” with platform-specific guides.