Module 06 // Outcomes & Measurement

Measuring Whether Interventions Actually Worked

Good intentions aren't outcomes. Here's how to tell the difference.

// module overview
You referred a student to tutoring. Their GPA went up. Did the tutoring cause the improvement — or would their grades have recovered anyway? This is one of the most important and underappreciated questions in school data analysis, and most schools never ask it.

This module introduces basic before/after delta analysis for measuring intervention outcomes. We'll use the interventions table — which contains the type, duration, referral source, and measured outcomes for 242 interventions — to build a real evaluation. We'll also cover the main pitfalls: seasonal patterns, small sample sizes, and regression to the mean.
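Regression to the mean is the subtlest of these pitfalls, and it's worth seeing in miniature before the walkthrough. The sketch below (a synthetic simulation, not the module's dataset) selects students the way real referrals happen — after a bad stretch — and shows that their scores rebound even though no intervention occurs anywhere in the simulation:

```python
import numpy as np

rng = np.random.default_rng(42)

# Each student has a stable "true" GPA; each term we observe it with noise.
# No intervention happens anywhere in this simulation.
n_students = 10_000
true_gpa = rng.normal(3.0, 0.4, n_students).clip(0, 4)
term1 = (true_gpa + rng.normal(0, 0.3, n_students)).clip(0, 4)
term2 = (true_gpa + rng.normal(0, 0.3, n_students)).clip(0, 4)

# "Refer" the students whose term-1 GPA dipped below 2.5 -- selected,
# as real referrals are, on a single bad observation.
referred = term1 < 2.5

print(f"Referred students, term 1 mean: {term1[referred].mean():.2f}")
print(f"Referred students, term 2 mean: {term2[referred].mean():.2f}")
# The term-2 mean is noticeably higher even though nothing was done:
# students selected on a low, noisy score tend to bounce back on their own.
```

This is exactly why "the student improved after the intervention" proves so little on its own — some of that improvement was going to happen regardless, which is what the comparison group in this module is for.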
// key insight
Measuring whether interventions work isn't about proving you did a good job — it's about knowing what to do more of, and what to stop wasting time on.
// what you'll learn
🏫
What Educators Will Learn
  • Why 'the student improved after the intervention' is not the same as 'the intervention worked'
  • What a comparison group is and why you need one to measure effectiveness
  • How to read a before/after delta report: which numbers actually tell you something
  • Common confounders in school data: semester transitions, test windows, seasonal illness
  • How to communicate intervention results honestly to staff, families, and school boards
🐍
Python Walkthrough
  • Joining the interventions table with attendance and grades to build a before/after window
  • Calculating GPA and attendance deltas: 4 weeks pre-intervention vs. 4 weeks post
  • Building a comparison group: similar students who were not referred
  • Visualizing outcomes: intervention group vs. comparison group over time
  • Running a basic t-test with scipy.stats — and discussing why p-values aren't the whole story
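The steps above can be sketched end to end. This is a minimal outline, not the module's actual notebook: the table and column names (`student_id`, `start_date`, weekly `gpa` snapshots) are assumptions, and synthetic data stands in for the real interventions and grades tables so the example runs on its own:

```python
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic stand-in for the interventions table (column names assumed).
interventions = pd.DataFrame({
    "student_id": range(60),
    "start_date": pd.Timestamp("2024-02-01"),
    "type": "tutoring",
})

# Weekly GPA snapshots spanning 4 weeks before and 4 weeks after the
# start date, for referred students (ids 0-59) and a comparison pool.
def weekly_gpa(ids, base):
    weeks = pd.date_range("2024-01-04", periods=8, freq="7D")
    return pd.DataFrame({
        "student_id": np.repeat(list(ids), len(weeks)),
        "week": np.tile(weeks, len(list(ids))),
        "gpa": (base + rng.normal(0, 0.3, len(list(ids)) * len(weeks))).clip(0, 4),
    })

grades = pd.concat([
    weekly_gpa(range(60), 2.4),       # referred students
    weekly_gpa(range(60, 120), 2.5),  # comparison pool, never referred
])

# Join grades to interventions, then split each student's weeks into
# pre/post windows around the intervention start date.
merged = grades.merge(interventions[["student_id", "start_date"]],
                      on="student_id", how="left")
merged["start_date"] = merged["start_date"].fillna(pd.Timestamp("2024-02-01"))
merged["period"] = np.where(merged["week"] < merged["start_date"], "pre", "post")

# One delta per student: mean post-window GPA minus mean pre-window GPA.
deltas = (merged.pivot_table(index="student_id", columns="period", values="gpa")
                .assign(delta=lambda d: d["post"] - d["pre"]))

treated = deltas.loc[deltas.index < 60, "delta"]
comparison = deltas.loc[deltas.index >= 60, "delta"]

# Welch's t-test on the deltas: did the treated group's change differ
# from the comparison group's change?
t, p = stats.ttest_ind(treated, comparison, equal_var=False)
print(f"treated mean delta {treated.mean():+.2f}, "
      f"comparison {comparison.mean():+.2f}, p = {p:.3f}")
```

Note that the test compares *deltas* across groups rather than each group's raw before/after scores — that is what protects the result from seasonal effects and regression to the mean, since both groups experience them equally. The p-value discussion in the module builds on exactly this setup.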