
Visible Learning in Practice: The Hattie Strategies That Actually Move the Needle

In 2009, John Hattie published Visible Learning, a synthesis of over 800 meta-analyses covering 80 million students. His core finding: not all instructional strategies are equal. Some have massive effects on student achievement. Others — including many things schools spend significant time and money on — barely move the needle.

Understanding the top strategies is not about following a formula. It's about knowing where to invest your limited time and energy.

The Effect Size Framework

Hattie used effect size (Cohen's d) to compare strategies. An effect size of 0.4 corresponds roughly to one year of expected growth — Hattie called this the "hinge point." Strategies above 0.4 produce more than a typical year's growth; strategies below it produce less.

The top performers are not exotic. They're things skilled teachers have always done, just now with empirical backing for why they work.
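
To make the metric concrete, here is a minimal sketch of how Cohen's d is computed for two groups. The scores are invented for illustration — nothing here comes from Hattie's data:

```python
from statistics import mean, stdev

# Hypothetical unit-test scores: one class taught with a target strategy,
# one comparison class. These numbers are made up for illustration only.
treatment = [78, 85, 82, 90, 74, 88, 81, 79]
control = [70, 76, 73, 80, 68, 75, 72, 71]

def cohens_d(group_a, group_b):
    """Cohen's d: difference in means divided by the pooled standard deviation."""
    n_a, n_b = len(group_a), len(group_b)
    pooled_var = ((n_a - 1) * stdev(group_a) ** 2
                  + (n_b - 1) * stdev(group_b) ** 2) / (n_a + n_b - 2)
    return (mean(group_a) - mean(group_b)) / pooled_var ** 0.5

d = cohens_d(treatment, control)
```

In these toy numbers d comes out well above the 0.4 hinge point; in real classroom research, effects that large are rare, which is exactly why 0.4 makes a useful bar.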

Strategy 1: Teacher Clarity (d = 0.75)

Students learn better when they know what they're learning and why. This sounds obvious, but most lessons communicate objectives poorly.

Teacher clarity means:

  • Sharing the learning intention ("Today we're learning to...") and success criteria ("You'll know you've got it when you can...") at the start of every lesson
  • Returning to those criteria during and after learning
  • Using language students understand, not bureaucratic standards-speak

The research finding is this: when students can articulate what they're learning and what success looks like, achievement goes up significantly. This isn't about posting an objective on the board and ignoring it. It's about making the target visible and using it throughout instruction.

Strategy 2: Feedback (d = 0.70)

Feedback is among the most studied variables in education research. Hattie's synthesis ranked it among the highest-effect practices — but only when it's done well.

High-effect feedback is:

  • Timely — the closer to the learning, the more useful
  • Specific — tells students exactly what to improve, not just "good job" or "try harder"
  • Forward-looking — points to the next step, not just the error
  • About the task, not the person — "your argument needs evidence" not "you're not trying"

Low-effect feedback is vague, late, and focused on the grade rather than the learning. Much of the written feedback teachers give falls into this category.

Strategy 3: Formative Assessment (d = 0.68)

Formative assessment means regularly checking for understanding during instruction, not just at the end of a unit. Exit tickets, cold calling, think-pair-share, quick writes — anything that gives the teacher real-time data about where students are.

The key move is using that data to adjust instruction. Teachers who monitor understanding but don't respond to what they find are collecting information without acting on it.

Hattie's finding aligns with Black and Wiliam's earlier work: formative assessment, used properly, is one of the most powerful tools in a teacher's arsenal.

Strategy 4: Direct Instruction (d = 0.60)

Direct instruction has been maligned in progressive education circles, but the research consistently shows it's highly effective — especially for introducing new content, building foundational skills, and teaching to mastery.

What Hattie means by direct instruction is not lecture. It's:

  • Stating clear learning goals
  • Presenting new content in small, sequenced chunks
  • Checking for understanding at each step
  • Providing guided practice before releasing students to independent work
  • Correcting errors immediately

This is the explicit, structured instruction model. It's most powerful when combined with opportunities for students to process and apply what they've learned.

Strategy 5: Spaced Practice (d = 0.65)

Massed practice (cramming all instruction on a topic into one block) produces faster short-term learning but poor retention. Distributed practice — returning to content over days and weeks — produces durable learning.

For teachers, this means:

  • Deliberately revisiting earlier content in warm-ups, reviews, and questions
  • Designing units so topics spiral back rather than disappear after the test
  • Building cumulative assessments that require students to retain earlier material, not just recall what was taught most recently

Spacing feels slower to students and sometimes to teachers. The content takes longer to feel "mastered" in the moment. But long-term retention is dramatically better.
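
One way to operationalize spacing is a simple review calendar. The intervals below are an illustrative assumption — the expanding gaps are the point, not these exact numbers:

```python
from datetime import date, timedelta

# Expanding review intervals in days. These particular gaps are an
# assumption for illustration; the research supports spacing generally,
# not this specific sequence.
REVIEW_GAPS = [1, 3, 7, 14, 30]

def review_schedule(first_taught, gaps=REVIEW_GAPS):
    """Return the dates on which a topic should resurface in warm-ups or reviews."""
    return [first_taught + timedelta(days=g) for g in gaps]

# A topic first taught on September 1 would resurface five times
# over the following month.
dates = review_schedule(date(2025, 9, 1))
```

Dropping each date into a warm-up or do-now slot keeps earlier content in circulation without reteaching it.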

What Doesn't Work As Well As We Think

Some popular practices have effect sizes well below the hinge point:

  • Learning styles instruction (the underlying theory has been repeatedly debunked)
  • Homework (d = 0.29 overall, with much weaker effects at the elementary level)
  • Ability grouping (d = 0.12)
  • Summer school remediation programs (d = 0.23)

This doesn't mean never do these things. Context matters. But it does mean we should be cautious about how much instructional time we invest in low-effect practices.

The Meta-Lesson

Hattie's core insight is that the teacher matters more than almost any structural variable — class size, technology, curriculum materials. A highly effective teacher using modest materials outperforms an average teacher using premium resources.

What makes a teacher highly effective, according to the data: being clear about learning goals, checking understanding constantly, giving specific feedback, and being willing to change instruction based on what students show you.

LessonDraft is built to support exactly this kind of teaching — helping you design lessons with clear objectives, built-in formative checks, and meaningful feedback structures.

The research is clear. The question is what you do with it.

Frequently Asked Questions

What is John Hattie's most important finding?

That the teacher is the most important school-based variable in student achievement, and that specific practices — particularly teacher clarity, feedback, and formative assessment — produce significantly higher learning outcomes than others.

Does Hattie's research mean direct instruction is better than discovery learning?

Hattie's data shows direct instruction has a higher average effect size than unstructured discovery learning, but both have a role. Structured inquiry with explicit instruction embedded tends to outperform either extreme alone.

How do I use effect sizes as a teacher?

Think of 0.4 as your bar. Strategies well above that threshold (feedback, clarity, formative assessment, direct instruction, spaced practice) should get priority time and energy. Strategies well below that bar are worth questioning before investing heavily.

Stop spending Sundays on lesson plans

Join teachers who create complete, standards-aligned lesson plans in under 60 seconds. Free to start — no credit card required.

No signup needed to try. Free account unlocks 15 generations/month.