Week 17: Test and Collect Data
Run the Experiment
Your protocol from Week 16 is a hypothesis: "If I follow this plan, my friction point will improve." This week we find out if the hypothesis holds up.
Good scientists don't just "try things and hope." They measure, record, and analyze. That's exactly what we'll do. The data will tell us what's working, what isn't, and what needs to change.
- The student has been running their protocol for about a week now. This session is a check-in, not an introduction.
- If the protocol is working, celebrate and discuss why. If it's struggling, that's equally valuable data.
- The message is: imperfect data about a real problem is infinitely more useful than a perfect plan that was never tested.
- Avoid the temptation to "fix" the protocol for the student. Ask questions that help them fix it themselves.
Week at a Glance
| Prep time | ~5 minutes |
| Materials | Protocol from Week 16, tracking data, graph paper or chart template, baseline data from Week 15 |
| Key vocabulary | data collection, baseline comparison, mid-experiment check |
| Difficulty | Moderate |
Facilitator Preparation
- Review the student's protocol from Week 16
- Check their tracking data from the past week
- Prepare graph paper or a simple chart template for data visualization
- Have the student's baseline data from Week 15 ready for comparison
- Be prepared for two scenarios: the protocol working well or struggling
The most important skill this week: honest self-reporting. If the student fudges the data to look good, the whole project loses value. Create a safe environment where "my protocol failed 4 out of 5 days" is celebrated as honest, useful data — not criticized as failure.
For Younger Learners (Ages 8–9)
Simplest version of the concept: "You made a plan last week. Now you're trying it out for real — and writing down what actually happens, honestly."
What to shorten or skip:
- Focus on simple, honest tracking: did I follow my protocol today? What happened?
- Skip formal data analysis and sample size discussion. Use: "Let's look at your check marks. How's it going?"
- Skip the "data vs. feelings" lecture. Keep it concrete: "Your brain might say 'it's going fine' even when the data says otherwise. That's why we write it down."
- Keep sessions to 20 minutes.
Adapting the activities:
- Use the index card from Week 16. Each day, the learner adds a check (✓) or X.
- Mid-week check-in: sit down together and count. "How many days did you follow the protocol? How many didn't? What happened on the X days?"
- If the data is messy or the protocol isn't working, that's fine — it's useful information: "Now we know what to change."
- The facilitator helps the learner be honest without feeling judged.
Journal alternative: "I followed my protocol ___ out of ___ days. What happened: ___. What surprised me: ___." Spoken is fine.
What success looks like: The learner tracked their protocol honestly for several days and can say whether it's working, not working, or unclear.
For Older Learners
- Full data collection with a tracking sheet: date, trigger occurred (Y/N), protocol followed (Y/N), outcome rating (1–10), notes.
- Mid-experiment analysis: graph or chart the data. Look for patterns.
- Discuss data integrity: "Why is it important to record bad days honestly?"
- Begin thinking about what they'd change for Protocol v2.0.
Guided Session 1
The Data Collection Plan
Learning Goal
By the end of this session, the student can:
- explain why measurement is essential to improvement
- describe what makes "good" data (specific, honest, consistent)
- set up or refine their tracking system for the remaining test period
Activities
1. Why Data Beats Feelings
Start with this:
"How do you think your protocol is going so far? Not the data — just your feeling."
Write down their gut answer.
Now look at their actual tracking data from the past week.
"Do the numbers match your feeling?"
Often they don't! Common mismatches:
- "It feels like it's working" but the data shows only 2 out of 5 successful days
- "It feels like a failure" but the data shows 3 out of 5 days improved (better than baseline!)
"This is why we collect data instead of relying on feelings. Remember availability bias? We remember the dramatic moments (the one day it failed spectacularly) and forget the quiet successes. Data tells the real story."
2. Good Data vs. Bad Data
Review the student's tracking. Does it meet these criteria?
| Good Data | Bad Data |
|---|---|
| Specific numbers ("Left house at 7:18") | Vague feelings ("It went OK") |
| Recorded right when it happened | Written from memory days later |
| Consistent (same format every day) | Different each time |
| Honest (includes failures) | Only records successes |
| Includes context ("Had a bad morning") | Just numbers with no context |
If the tracking isn't detailed enough, help improve it now.
3. Refine the Tracking System
Based on what's working and what's not, set up (or fix) the tracking method:
For a simple friction point, a daily tally might be enough:
| Day | Trigger Happened? | Followed Protocol? | Result (1–10) | Notes |
|---|---|---|---|---|
| Mon | Y | Y | 7 | Almost forgot but saw the card |
| Tue | Y | N | 3 | Rushed and skipped it |
| Wed | Y | Y | 8 | Worked great! |
| ... | | | | |
Discuss: "Is this system easy enough that you'll actually do it every day? If not, simplify it."
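For facilitators who want a concrete way to tally a log like the one above, the arithmetic can be sketched in a few lines of code. The rows and ratings below are hypothetical examples, not real student data:

```python
# Summarize a week of hypothetical tracking rows:
# (day, trigger_happened, followed_protocol, result_rating_1_to_10)
log = [
    ("Mon", True, True, 7),
    ("Tue", True, False, 3),
    ("Wed", True, True, 8),
    ("Thu", True, True, 6),
    ("Fri", True, False, 4),
]

# Only days when the trigger actually occurred count toward follow-through.
trigger_days = [row for row in log if row[1]]
followed = [row for row in trigger_days if row[2]]

follow_through = len(followed) / len(trigger_days)
avg_when_followed = sum(r[3] for r in followed) / len(followed)

print(f"Follow-through: {len(followed)}/{len(trigger_days)} days ({follow_through:.0%})")
print(f"Average result on protocol days: {avg_when_followed:.1f}/10")
```

The same counting can of course be done by hand with the index card; the point is that "follow-through rate" is just successes divided by trigger days.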
Guided Session 2
Mid-Experiment Check-In
Learning Goal
By the end of this session, the student can:
- analyze their tracking data to identify patterns
- distinguish between a protocol failure and bad luck
- propose specific improvements based on evidence
Activities
1. The Data Review
Spread out all the tracking data. Look for patterns:
- Success rate: Out of X trigger events, how many times did you follow the protocol? ___ / ___
- Trend: Is it getting better, worse, or staying flat over time?
- Best day: When did the protocol work best? What was different about that day?
- Worst day: When did it fail most? What was different?
- Comparison to baseline: Before the protocol, the friction point happened ____. Now it happens ____.
Create a simple visual: a bar chart or line graph of daily scores.
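If graph paper isn't handy, a quick text chart works just as well. A minimal sketch, using invented daily scores for illustration:

```python
# Build a simple text bar chart of hypothetical daily outcome ratings (1-10).
scores = {"Mon": 7, "Tue": 3, "Wed": 8, "Thu": 6, "Fri": 4}

# One line per day: the bar length equals that day's rating.
chart = [f"{day} | {'#' * score} ({score}/10)" for day, score in scores.items()]
print("\n".join(chart))
```

A chart like this makes the trend question ("better, worse, or flat?") visible at a glance, even for learners who find tables of numbers hard to read.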
2. Diagnose the Results
Use the 2×2 grid from Week 2 (Process vs. Outcome):
| | Good Outcome | Bad Outcome |
|---|---|---|
| Followed Protocol | 🌟 The system works! | 🤔 Bad luck? Or protocol needs adjusting? |
| Didn't Follow Protocol | 🎰 Got lucky — doesn't count | ❌ Expected — the protocol can't help if you don't use it |
For each day of data, which box does it fall into?
"If the protocol worked when you followed it but you only followed it 2 out of 5 days, the problem isn't the protocol — it's the follow-through. That's a different problem to solve."
"If you followed the protocol every day but it still didn't help, the protocol itself needs changing. That's Week 18's job."
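The sorting rule behind the grid can be written as a tiny classifier. This is an optional sketch; the quadrant labels paraphrase the table, and a "good outcome" here simply means the day's result beat the baseline:

```python
def quadrant(followed_protocol: bool, good_outcome: bool) -> str:
    """Map one day of data onto the Process-vs-Outcome grid."""
    if followed_protocol and good_outcome:
        return "system works"
    if followed_protocol and not good_outcome:
        return "bad luck, or protocol needs adjusting"
    if not followed_protocol and good_outcome:
        return "got lucky - doesn't count"
    return "expected failure - protocol wasn't used"

# A day you followed the plan but it still went badly:
print(quadrant(followed_protocol=True, good_outcome=False))
```

Sorting each day into a quadrant before drawing conclusions keeps the two failure modes separate: a follow-through problem and a protocol-design problem call for different fixes.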
3. The Update Question
One of the most powerful thinking habits is updating your confidence based on real evidence. Before this experiment, you had a prediction. Now you have data — so your confidence should shift:
"Before this experiment, you predicted your protocol would work with ___% confidence. Now that you have data, what's your updated confidence?"
This is the same calibration skill you practiced in Week 3 — checking whether your confidence levels match reality. Put your Probability Glasses on: what does the data say your actual success rate is?
- If confidence went UP: great! What specifically is working?
- If confidence went DOWN: also great — that's useful information! What specifically isn't working?
- If it stayed the same: do you need more data, or is the protocol too weak to make a difference?
4. Quick Fixes
Based on the data, are there any small improvements that can be made RIGHT NOW?
- "I keep forgetting the trigger" → Add a reminder (alarm, sticky note, parent prompt)
- "The default action is too hard" → Simplify it
- "It works on weekdays but not weekends" → Add a weekend version
- "It works when I'm alone but not with friends around" → Add a social edge case
Make any quick fixes and continue testing for the remaining days.
Independent Practice
Goal
Continue running the protocol with improved tracking, and prepare data for the final presentation.
Activities
1. Continue the Experiment
Run the (possibly updated) protocol for the remainder of the week. Track data using the refined system.
Minimum viable version (younger learners): Keep tracking your protocol on the index card for a few more days. At mid-week, count your checks and X's with a grown-up. Answer: "Is my plan working? What's the hardest part?" If you want to change something about the plan, that's great — write the new version on a fresh card.
2. Start the Summary
Begin drafting the experiment summary. You'll present this in Week 18:
- The Problem: What was your friction point? How bad was it at baseline?
- The Protocol: What was your plan? (Trigger, default, check)
- The Data: What happened when you tested it?
- What Worked / What Didn't: Patterns you noticed
- Next Steps: What changes would you make for v2.0?
Decision Journal
Mid-experiment reflection: What has surprised you about running this protocol? Has the data matched your predictions? What's the most important thing you've learned from the experiment so far — not about the friction point, but about how you make decisions?
Reflection Questions
- Is there a difference between a protocol that "failed" and a protocol that taught you something?
- What would happen if you skipped the data collection and just relied on your gut feeling about how it's going?
- Professional scientists run experiments for months or years. Do you think 1–2 weeks is enough to really know if your protocol works?
Quick Mastery Check
After this week, check whether the learner can:
- Report honestly: "How many days did you follow your protocol? How many days didn't you?" (Looking for: honest numbers, not vague claims like 'pretty well.')
- Identify a pattern: "What happened on the days you followed the protocol vs. the days you didn't?" (Looking for: any observation about the difference — even "I'm not sure yet" is fine if they explain why.)
- Stay curious, not defeated: "If your protocol didn't work perfectly, what might you change?" (Looking for: a specific tweak — not giving up, and not pretending it worked when it didn't.)
If the learner has honest data and at least one idea for improvement, they're ready for the iteration and presentation in Week 18.
Pause and Notice
After reviewing the mid-experiment data, ask:
"Was it hard to be honest about the days your protocol didn't work? What made it hard?"
"There's a pull to make ourselves look good — to round up, to say 'close enough,' to skip recording the bad days. That pull is natural. But the whole point of data is to tell the truth that your feelings might not. Honest data is a gift to your future self."
This week's takeaway: Being honest with your data — even when it's not flattering — is one of the bravest things a thinker can do. The data isn't judging you. It's just information.
Spiral Review
- From Week 10: "Your tracking data is signal. Your 'gut feeling' about how it went is potentially noise. This is exactly why you write it down."
- From Week 4: "Watch out for hindsight bias. If your protocol worked, don't say 'I knew it would.' If it failed, don't say 'I knew it wouldn't work.' Check what you actually predicted."
- From Week 3: "How well-calibrated was your confidence? Before testing, did you predict how likely the protocol was to work? Compare that prediction to the data."
- From Week 9: "You can calculate a rough success rate from your data. If you followed the protocol 4 out of 5 days, your compliance rate is 80%. Is that good enough, or does it need to be higher to solve the friction point?"