Formative Assessment Strategies That Actually Inform Your Teaching

Formative assessment is one of the most researched and most misused concepts in education. The research is clear: formative assessment — gathering information about student learning during instruction and using that information to adapt — produces some of the largest effect sizes of any instructional intervention. The practice in many classrooms is far less impressive: a quick show of hands, a thumbs-up/thumbs-down, or an exit ticket that gets reviewed the following week when the class has already moved on.

The gap between what formative assessment can do and what it usually does is mostly a function of speed and specificity. Data that comes too late or is too vague to act on isn't formative assessment — it's delayed summative assessment.

What Formative Assessment Actually Is

Formative assessment is not a specific technique. It's a process: eliciting evidence of student understanding, interpreting that evidence, and using it to make instructional decisions. It can happen with a single question or an extended performance task. What makes it formative is not the tool but the use — the information has to change something about what happens next.

This means the most important question to ask about any formative assessment strategy is not "is this a good way to check for understanding?" but "what will I do differently based on what I find out?"

If the answer is "the same thing I was going to do anyway," the assessment isn't formative — it's just data collection.

Low-Stakes Techniques That Generate Real Information

Exit tickets with a single diagnostic question: At the end of class, students answer one question that reveals whether they understood the core concept. The question should be specific enough that there are meaningfully different wrong answers — not "did you understand?" but "solve this problem and explain your reasoning" or "predict what would happen if we changed this variable and why."

Sort exit tickets into three piles: got it, almost, not yet. Plan the next day's instruction based on the distribution. If more than a third of students are in the "not yet" pile, the lesson wasn't what they needed. If nearly everyone got it, you can move faster than planned.
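For teachers who like seeing a decision rule spelled out, here is a minimal sketch of that triage logic in Python. The thresholds (a third of the class in "not yet," nine in ten in "got it") follow the rough guidance above but are assumptions to tune for your class, and the function name is invented for illustration, not a standard tool.

    # A sketch of the exit-ticket triage rule: three pile counts in,
    # a next-day instructional decision out. Thresholds are illustrative.
    def plan_next_lesson(got_it: int, almost: int, not_yet: int) -> str:
        total = got_it + almost + not_yet
        if total == 0:
            return "No tickets collected; no decision possible."
        if not_yet / total > 1 / 3:
            return "Reteach the core concept with a different approach."
        if got_it / total >= 0.9:
            return "Move faster than planned; extend or enrich."
        return "Proceed as planned; pull the 'not yet' students into a small group."

    # Example: 28 tickets sorted into piles of 15 / 7 / 6.
    print(plan_next_lesson(got_it=15, almost=7, not_yet=6))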

Strategic cold calling versus warm calling: Cold calling (calling on students who didn't raise their hands) produces useful data — it tells you what the students who aren't self-selecting know, which is usually more representative of the class than the three hands that go up every time. Warm calling (telling students in advance they'll be called on) reduces performance anxiety while maintaining broad participation. Both produce more representative data than accepting only volunteer answers.

Hinge questions: A hinge question is a single diagnostic question at a critical transition point in a lesson — usually when the whole-class instruction phase ends and practice begins. The answer reveals which of two or three paths the lesson should take next. A good hinge question has wrong answers that reveal specific misconceptions (not just "got it" or "didn't get it"), and the teacher can assess the whole class's answers quickly enough to act before moving on.
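One way to make the structure of a hinge question concrete is to treat it as data: each answer option is tied in advance to a diagnosis and an instructional response. A minimal sketch follows, with a hypothetical question, hypothetical misconceptions, and hypothetical branches; a real class might warrant splitting into two paths rather than following only the most common answer.

    # A hinge question as data: each option maps to a diagnosis and a response.
    # The question, options, and responses are invented for illustration.
    hinge_question = {
        "prompt": "What happens to the shadow's length as the sun rises higher?",
        "options": {
            "A": ("Correct: the shadow gets shorter", "Move on to practice."),
            "B": ("Thinks shadow length is fixed", "Revisit the light-source demo."),
            "C": ("Reverses the relationship", "Redraw the angle diagram together."),
        },
    }

    def next_step(class_answers):
        # Branch on the most common answer across the class.
        most_common = max(set(class_answers), key=class_answers.count)
        diagnosis, response = hinge_question["options"][most_common]
        return f"{diagnosis} -> {response}"

    print(next_step(["A", "C", "C", "B", "C", "A", "C"]))  # most common: C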

Whiteboards and quick physical response: Students write their answer to a question on a whiteboard or notecard and hold it up simultaneously. The teacher sees every answer at once rather than one at a time. The simultaneous reveal matters: when answers come out one by one, students see their peers' responses before committing, social calibration kicks in, and hesitant students change their answers to match. Having everyone commit before the reveal captures what each student actually thinks.

Observation during practice: Walking the room during independent or collaborative practice with a specific observation focus — not general monitoring, but attending to whether students are making a specific type of error — generates real-time data that can redirect the class before independent practice calcifies the mistake.

Making the Data Actionable

The most valuable formative assessment data is specific enough to act on immediately. Generic data ("some students struggled with today's content") doesn't tell you what to do. Specific data ("eight students wrote a correct procedure but got a wrong answer, which suggests they have the steps but are making a calculation error at step three") tells you exactly what needs to happen next.

Build the habit of interpreting formative data with this question: what is the most likely reason students responded the way they did? The answer usually leads directly to the instructional response.

LessonDraft helps teachers design lessons with embedded formative checkpoints — questions, tasks, and observation prompts placed at strategic moments in the lesson flow so assessment is woven into instruction rather than appended at the end.

Small Groups as a Formative Response

The most direct response to formative data is regrouping. When assessment reveals that some students are ready to move on and others aren't, splitting into temporary small groups allows targeted instruction without leaving the advanced students in holding patterns.

Temporary small groups formed from formative data are different from permanent ability groups. They form around a specific identified gap, disband when that gap is addressed, and reform differently in response to the next diagnostic cycle. Students move in and out of them based on what they demonstrate, not based on fixed labels.
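The mechanics are simple enough to express as a sketch: grouping keyed to today's diagnostic result, rebuilt from scratch after every new check rather than stored as a fixed label. The student names and results below are hypothetical.

    # Temporary, data-driven grouping: rebuilt from each diagnostic cycle.
    from collections import defaultdict

    def regroup(results):
        groups = defaultdict(list)
        for student, result in results.items():
            groups[result].append(student)
        return dict(groups)

    monday = regroup({"Ana": "not yet", "Ben": "got it", "Cleo": "almost",
                      "Dev": "not yet", "Eli": "got it"})
    # Tuesday's check produces different results, and different groups.
    tuesday = regroup({"Ana": "got it", "Ben": "got it", "Cleo": "not yet",
                       "Dev": "almost", "Eli": "got it"})
    print(monday["not yet"])   # ['Ana', 'Dev']
    print(tuesday["not yet"])  # ['Cleo']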

Regrouping like this requires a classroom culture where small group instruction doesn't feel like sorting — where being in a targeted group signals "this teacher has paid attention and knows what I need," not "I've been identified as the slow group." Build that culture through transparency: "Based on what I saw today, these five students are going to work with me on X while everyone else works on Y. Tomorrow the groups might be completely different."

Your Next Step

Design one hinge question for a lesson you're teaching this week — a single question at the critical transition point, with two or three wrong-answer options that reflect specific misconceptions, and a plan for what you'll do based on the results. Use it, sort the responses in sixty seconds, and redirect accordingly. That's formative assessment working the way it should.

Frequently Asked Questions

How is formative assessment different from summative assessment?
Summative assessment evaluates what students have learned after instruction is complete — unit tests, final projects, standardized tests. The data is useful for grading and reporting but rarely influences the instruction that produced the results, because that instruction is already done. Formative assessment evaluates what students are learning during instruction, with the explicit purpose of informing what happens next. The same instrument can function as either: a quiz administered at the end of a unit and recorded in the gradebook is summative; a quiz administered mid-unit with results used to reshape instruction before the unit continues is formative. The distinction is purpose and timing, not the instrument itself.
How do I use formative assessment data without overwhelming myself with data management?
Don't try to record everything — that defeats the purpose. Formative assessment data is meant to be used immediately, not archived. A sticky note with three columns (got it / almost / not yet) and tally marks from exit tickets tells you what you need to know without a spreadsheet. The observation notes you make during practice (two or three specific things you noticed while walking the room) guide tomorrow's lesson without requiring a data entry system. For patterns that matter over time — a student who consistently struggles at a specific concept, a class-wide misconception that keeps reappearing — brief notes in a class roster or gradebook capture what you need. The standard should be: is this data improving my instruction? If the management burden exceeds the instructional benefit, the system is too complex.
What if formative assessment reveals that most of the class didn't understand something I already taught?
Reteach it — but differently. The most common mistake after a failed lesson is to reteach the same way but slower or louder. If students didn't understand the first explanation, a clearer version of the same explanation will help some but not all. Instead, diagnose the specific source of confusion (what exactly didn't land?), then try a different approach: a different representation, a different example, a different sequence, a hands-on or visual version if the first was purely verbal. The formative data tells you that the lesson didn't work; you still have to figure out why in order to do something different. Talking briefly with the students who struggled — "I noticed on the exit ticket that this part was confusing; what was the sticky part for you?" — often reveals the specific misconception faster than any other method.
