
Assessment For Learning: Using Assessment to Drive Instruction, Not Just Grade It

Paul Black and Dylan Wiliam's 1998 review of formative assessment research produced one of the most striking findings in educational research: consistent, well-implemented formative assessment raised student achievement by effect sizes of 0.4 to 0.7 — some of the largest gains ever documented for a classroom intervention. Two decades of subsequent research have largely replicated this finding. Little else in the instructional toolkit has as strong an evidence base.

And yet most assessment time in most classrooms is spent on summative assessment — assessment designed to record what students have learned, not to improve it. The reason isn't that teachers don't know about formative assessment. It's that formative assessment, done well, requires changing what you do with what you find.

The Key Distinction

The difference between formative and summative assessment isn't primarily about timing or format. It's about function. Assessment is formative when the information it produces is used to adapt instruction. Assessment is summative when it's used to record achievement for reporting purposes.

A quiz at the end of a week can be summative (graded, recorded, done) or formative (analyzed for patterns, used to plan next week's instruction). A test can be formative if you examine which questions most students missed and adjust your explanations accordingly. The label "formative" isn't in the assessment itself — it's in how the information is used.

This distinction matters because it changes what you do after assessment. Summative assessment ends with a grade. Formative assessment ends with an instructional decision: do I re-teach this? Do I accelerate? Which students need small-group work? Which misconception is most widespread?

The Five Formative Practices

Wiliam's framework identifies five key formative assessment strategies:

Sharing learning intentions and success criteria: Students who know what they're trying to learn and what success looks like can monitor their own progress. This isn't just posting objectives on the board — it's communicating what understanding looks like clearly enough that students can self-assess against it.

Eliciting evidence of learning: Active, ongoing gathering of information about student thinking — not waiting for a test, but continuously sampling understanding through questions, tasks, and written responses. Exit tickets, cold-calling with think time, mini-whiteboards, structured observation of student work.

Providing feedback that moves learning forward: Feedback that tells students where they are, where they're going, and how to get there — not a grade, not a "good job," but specific information about what the student understands and what to do to deepen it. This is the hardest part of formative assessment and the part with the highest payoff.

Activating students as learning resources for each other: Structured peer feedback, peer tutoring, collaborative problem-solving. Students who explain concepts to each other learn them more deeply. Students who receive feedback from a peer often receive it differently than from a teacher.


Activating students as owners of their own learning: Self-assessment, goal-setting, learning logs. Students who monitor their own understanding develop the metacognitive capacity that drives autonomous learning.

Exit Tickets That Actually Inform Instruction

The most commonly used formative assessment tool is the exit ticket. The most common failure mode: exit tickets that aren't read, or that are read but don't inform the next day's instruction.

Exit tickets work when:

  • The prompt targets a specific learning goal, not general understanding
  • The responses are quickly sortable into categories (got it / almost / not yet)
  • The sort takes five minutes, not twenty
  • The categories directly inform the next instructional decision

A three-pile sort — strong understanding, developing understanding, significant misconception — produces an immediate instructional decision: the strong pile can move forward; the developing pile needs another approach; the misconception pile reveals a specific gap that needs targeted re-teaching.
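The three-pile sort can be sketched in a few lines of code. The pile names, students, and judgments below are purely illustrative; in practice the judgment for each ticket comes from the teacher's quick read, not from software:

```python
def sort_exit_tickets(responses):
    """Group exit-ticket responses into three piles that map to instructional decisions."""
    piles = {"strong": [], "developing": [], "misconception": []}
    for student, judgment in responses:
        piles[judgment].append(student)
    return piles

# Hypothetical tickets, each tagged with the teacher's quick judgment.
tickets = [
    ("Ava", "strong"),
    ("Ben", "developing"),
    ("Caleb", "misconception"),
    ("Dana", "strong"),
]

piles = sort_exit_tickets(tickets)
for pile, students in piles.items():
    # Each pile drives tomorrow's plan: extend, re-approach, or re-teach.
    print(pile, students)
```

The point of the sketch is the shape of the output: three lists, each already attached to a next instructional move, rather than a column of scores that still needs interpreting.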

LessonDraft can generate formative assessment sequences — exit tickets, check-for-understanding questions, peer assessment protocols, and success criteria — aligned to specific learning goals for any subject and grade level. Building a systematic formative assessment practice doesn't require designing every piece from scratch.

Feedback Design

The feedback literature is clear about what moves learning forward: specific, actionable information about the gap between current performance and the target, with guidance about how to close it. Not grades (which provide no information about what to do). Not general praise (which provides no information about the gap). Not extensive written comments that students read once and never use.

The most efficient feedback format is the two-item response: one specific strength (which tells the student what good work looks like in practice) and one specific growth edge with a concrete suggestion (which tells the student what to do differently). "Your claim is clear and arguable. Your evidence in paragraph 2 is relevant but you don't explain how it supports the claim — try adding a sentence that connects the evidence to your claim explicitly." That's actionable. "Good effort but needs more support" is not.

The research on feedback timing points the same way: feedback students can act on before their next attempt has a much larger effect than feedback that arrives after the moment to use it has passed. Feedback received days after work was done has minimal impact on the next piece of work. This argues for building feedback cycles into instruction — short practice, immediate feedback, revision, next attempt — rather than waiting for assignments to be graded and returned.

Making Formative Assessment Sustainable

Teachers who abandon formative assessment usually abandon it because it creates more work. The key is building efficient systems that produce information quickly enough to be useful without requiring hours of analysis.

Three principles for sustainable formative assessment:

  1. Not everything needs feedback. Identify which pieces of work will get teacher feedback and which will get self-assessment, peer assessment, or automated feedback. Reserve your time for the highest-leverage pieces.
  2. Sort, don't score. Categorizing student responses (not yet / approaching / got it) takes a fraction of the time that scoring takes and produces exactly the information you need for instructional decisions.
  3. Use class time for formative purposes. Cold-calling with think time, whiteboard responses, structured partner discussions — these generate real-time evidence about student thinking without requiring any take-home analysis.

The teachers who implement formative assessment most effectively are not spending more time on assessment overall. They're spending their assessment time differently — more on formative, less on summative — and producing better learning outcomes as a result.

Frequently Asked Questions

What's the most important formative assessment strategy to start with?
Exit tickets analyzed for patterns and used to plan the next day's instruction. This is accessible, takes minimal class time, and produces direct instructional information. The key is closing the loop: the exit ticket only becomes formative when the analysis of responses actually changes what you do next. If you read them and file them, they're not serving their function.
How do you provide useful feedback without spending hours on it?
Use the two-item format: one specific strength, one specific growth edge with a concrete suggestion. This takes 30-60 seconds per piece of work and produces more actionable information than extensive comments. Reserve elaborate feedback for the highest-stakes work; use efficient sorting and brief notes for practice and formative work.
Can standardized tests be used formatively?
Individual item analysis from standardized tests can inform instruction if the tests are aligned to specific learning progressions and results are available quickly enough to act on. In practice, most standardized tests return results too slowly (weeks or months after administration) to be truly formative. Classroom-designed assessments aligned to current instruction are almost always more formatively useful.


Create assessments in seconds, not hours

Generate quizzes, exit tickets, and formative assessments aligned to your standards. Multiple formats, instant results.

No signup needed to try. Free account unlocks 15 generations/month.