Extension Week 1 — AI-Generated Media
This extension explores one of the fastest-changing areas of media literacy: AI-generated content. Students learn that artificial intelligence can now create realistic images, text, audio, and video — and that these creations are becoming harder to distinguish from human-made media. The goal isn't to scare students, but to extend their verification toolkit to include a new category of content they'll increasingly encounter.
Key Vocabulary
| Term | Definition |
|---|---|
| AI-generated content | Images, text, audio, or video created by a computer program rather than a human |
| Deepfake | A video or audio recording where AI has replaced one person's face or voice with someone else's |
| Hallucination (AI) | When an AI generates information that sounds confident but is factually incorrect — it doesn't know it made a mistake |
| Artifact (AI image) | A visual glitch in an AI-generated image, such as distorted hands, garbled text, or impossible geometry |
Student-friendly explanation: You know how someone can draw a picture that looks real? Now computers can do that too — they can make pictures, write text, and even create voices that seem real but aren't. This doesn't mean everything online is fake. It just means you need to use your detective skills: Where did this come from? What clues do I notice? Do I need to check this somewhere else?
Connection
This extension adds AI-generated content to students' existing verification practice. The key insight is the same as Weeks 9–11: "seeing it" is not enough — you need to verify it.
From Weeks 9–11: All your verification tools still apply — check the source, search for the image elsewhere, ask who benefits. AI-generated media is just a new category that your existing toolkit handles.
From Week 1: "All media is constructed." AI-generated media is constructed too — but by a computer program trained on human-made examples, rather than directly by a person.
Teacher Preparation
Prepare the following:
- 3–4 pairs of images: one real photo and one AI-generated image of a similar subject. Sources like "Which Face Is Real?" (whichfaceisreal.com) are specifically designed for this exercise. Keep images age-appropriate (faces, animals, landscapes).
- A brief explanation of what "AI-generated" means in simple terms (the lesson provides one below)
- Examples of AI-generated text (paste a prompt into a chatbot and show the output alongside a human-written version of the same topic)
- Optional: an example of AI-generated audio or video (a "deepfake" clip of a celebrity — keep it lighthearted, like a face-swapped movie scene)
Tech note: AI changes fast, so the specific tools and examples you use may differ from what's described here. Focus on the concepts — the tools will evolve, but the thinking framework will not.
Minimal prep: Visit whichfaceisreal.com and pick 3–4 image pairs — one real, one AI-generated. That's your core activity. No printing needed — just show them on a screen. If you want AI text, paste any topic into a free chatbot and compare the output to a human-written paragraph.
Tone note: AI can feel like a scary or overwhelming topic. Keep the tone matter-of-fact and empowering: "This is a new kind of constructed media. Now that you understand construction (from Week 1), you're already equipped to think about it. You're just adding a new item to your toolkit."
Guided Session 1
What Is AI-Generated Content?
Learning Goal
Students can explain what AI-generated content is, name the main types (images, text, audio, video), and understand that it's a new form of media construction.
Activities
- A New Kind of "Constructed" — Recall Week 1: "All media is constructed. Someone made it." Now add: "But what if the 'someone' is... a computer program? AI can now generate images that look like real photos, text that reads like a real person wrote it, and even voices that sound like real people. This is AI-generated content."
- How It Works (Simple Version) — Explain: "AI learns by studying millions of real examples — real photos, real articles, real artwork created by real people. After studying all those examples, it learns to make new ones that look like they could be real — but they aren't photos of anything that actually exists. It's like a student who studies thousands of paintings and then creates a brand-new painting in the same style."

  Important context: "Those millions of examples came from real artists, writers, photographers, and musicians. Most of them didn't know their work would be used to train an AI. This raises questions about fairness and ownership that society is still figuring out."

  One more critical idea: "AI can be wrong while sounding completely sure of itself. Because it learned from patterns rather than from understanding, it can produce answers that sound confident and well-written but are factually incorrect. This is sometimes called a hallucination — the AI generates something that doesn't exist or isn't true, but it doesn't know it made a mistake. This is why you should always verify AI-generated information the same way you'd verify any other source."
- The Four Types — Show examples of each type:
  - AI Images: Show an AI-generated face or landscape. "This person doesn't exist. This place doesn't exist. A computer created them."
  - AI Text: Show a paragraph written by a chatbot. "A person didn't write this. An AI did."
  - AI Audio: Describe or play an AI-generated voice. "This voice isn't real."
  - AI Video (Deepfakes): Describe or show a simple deepfake. "This person's face was put on someone else's body by AI."
- Why It Matters — Connect to media literacy: "If AI can create fake photos, fake voices, and fake text that look and sound real, then 'seeing it with your own eyes' is no longer proof that something is true. You need MORE than your eyes. You need your verification skills."
Reflection Questions
- Before this lesson, could you tell the difference between a real photo and an AI-generated one?
- Does knowing that AI can create text change how you feel about things you read online?
- Why would someone use AI-generated content to deceive people? What would they gain?
Guided Session 2
Spotting AI — The Detection Toolkit
Learning Goal
Students can identify common signs of AI-generated images and apply their existing verification tools to AI content.
Activities
- Real or AI? (Image Round) — Show 6–8 images, a mix of real and AI-generated. For each one, the student guesses: "Real or AI?" After each guess, reveal the answer and discuss: "What clues helped? What clues were misleading?"
- Common AI Image Tells — AI images are getting better, but many still have detectable artifacts:
  - Hands and fingers: Often the wrong number of fingers, or fingers that blend together
  - Text in images: AI struggles with text — letters may be distorted or nonsensical
  - Symmetry glitches: Earrings that don't match, glasses arms that disappear, teeth that look too uniform
  - Background weirdness: Objects that melt into each other, impossible geometry, repeated patterns
  - Skin and hair: Overly smooth skin, hair that merges with the background, inconsistent lighting

  Look at AI images together and hunt for these tells.

  Important caveat: These visual tells are becoming less reliable as AI technology improves. They are a useful bonus skill, but your most dependable tools are still the ones from Weeks 9–11: check the source, search for the image elsewhere, and ask who benefits from you believing it's real. Visual tells supplement your verification toolkit — they don't replace it.
- The Verification Extension — AI content means adding new items to the verification toolkit:
  - ✅ All the existing checks (source, date, lateral reading)
  - ✅ Reverse image search — does this image appear anywhere else? If it only exists in one place, be suspicious
  - ✅ Look for tells — check hands, text, backgrounds, symmetry
  - ✅ Check the context — is there a real event, date, or location connected to this image?
  - ✅ Ask: who benefits? — If this image is fake, who benefits from people believing it's real?
  - ✅ Run The Media Checkpoint — all 7 questions apply to AI-generated content, especially question 5 (What's the evidence?) and question 1 (What am I looking at? — now includes "is this AI-generated?")
- AI Text Detection — Show two paragraphs about the same topic: one written by a human, one by AI. Can the student tell which is which? Discuss: AI text often sounds "correct but empty" — it uses the right words but may lack personal experience, specific examples, or a genuine voice. (Acknowledge: this is getting harder to detect, which is why verifying claims matters more than identifying the author.)
Reflection Questions
- Were you able to spot the AI images? What was the hardest one?
- How does AI-generated content change the "rules" of media literacy?
- If AI keeps getting better, what verification skills will become MORE important, not less?
Independent Session
AI Detective
Instruction
Create an AI Detective Report. Examine 5 images (the adult prepares a mix of real and AI-generated in advance). For each one:
- Your guess: Real or AI-generated?
- Your evidence: What clues did you look for? (Hands, text, backgrounds, symmetry, etc.)
- Your confidence: How sure are you? (1 = total guess, 5 = very confident)
- Your verdict (after checking/being told the answer): Were you right?
After completing all 5, write a short reflection:
- How many did you get right?
- What was the best clue for detecting AI images?
- What would you tell a friend who doesn't know about AI-generated content?
Skills Reinforced
- Visual analysis and pattern detection
- Applying AI-specific verification techniques
- Self-assessment and calibrating confidence
Setup
Prepare 5 images in advance (3 AI, 2 real — or the reverse). Print them or display them one at a time. Provide a notebook or report template. Set a timer for 25 minutes.
Quick Check
After this week's sessions, the student should be able to:
- Name the types: List the four kinds of AI-generated content (images, text, audio, video).
- Spot the clues: When looking at an image, check for common AI tells (hands, text, backgrounds).
- Apply the verification extension: Explain why "Ask: who benefits?" is one of the most important questions for AI-generated content — alongside checking the source and looking for visual clues.
Caregiver Look-Fors
- The student understands that AI-generated content is a form of construction, not magic
- They check for visual tells without becoming paranoid about every photo
- They connect AI verification back to the existing toolkit (source, date, lateral reading)
- They understand that AI can be wrong while sounding confident (hallucination)
- They ask the "who benefits" question when encountering suspicious content
🎯 Takeaway
Big idea: AI can create media that looks and sounds convincing — but your verification habits are still your best tools.
Remember: Approach AI-generated media with curiosity and caution, not panic. Ask where it came from, what clues you notice, and whether you need confirmation from another source.
Younger Learner Adaptation (Ages 6–8)
- Focus on images only: Skip text, audio, and video for now. AI faces are the most concrete and engaging.
- Use "Which Face Is Real?" as a game: Make it a fun guessing game with a running score.
- Simplify the explanation: "A computer looked at millions of real photos and learned to make new ones that look real but aren't real."
- Skip deepfakes: The concept of face-swapped video can be unsettling for younger learners.
Older Learner Extension (Ages 11–13)
- AI ethics discussion: Who owns AI-generated art? Is it fair to train AI on artists' work without permission? What about AI-generated music?
- Try it themselves: If appropriate, let the student use an age-appropriate AI image generator and then analyze the output for artifacts.
- Misinformation scenario: How could someone use AI-generated content to spread disinformation? Design a defense strategy.
- Policy research: What are governments and tech companies doing about deepfakes and AI-generated misinformation?
Accessibility Options
- Verbal detective: Instead of writing the AI Detective Report, the student describes their observations aloud.
- Large-print comparison: Display real and AI images at large size on a screen for detailed examination.
- Sorting game: Print images on cards and physically sort into "Real" and "AI" piles.
- Confidence scale with props: Use a physical scale (1–5 with numbered cards) instead of writing confidence ratings.
- Partner analysis: Adult and student examine each image together, taking turns pointing out clues.