I’m not sure if this is possible, but I’m starting to think it could be. I’ve been incredibly inspired by quite a few teachers who have been playing around with some very uncommon assessment approaches based upon the professional learning we’re doing together.
Take Michele, for example. She’s a teacher in the Kenmore-Town of Tonawanda school district, and she provides reading support to special education students. I’ve written about her before, and I remain compelled by the way she works with data.
She spent part of her spring break reflecting on her learning at the WNY Young Writer's Studio, and in the process, I think I may have learned even more than she did.
These are some of the questions that have been nudging at one or both of us all year:
- Do we have to test in order to learn what we need to about students’ strengths and needs?
- What do numbers really tell us about learners and learning and instruction that inspires both?
- What are we really interested in measuring anyway, and which approaches best enable this?
- Why do some educators place more value on quantitative data than qualitative, and how can we inspire change here?
- What are the limitations of qualitative data? How can we attend to them?
- What are we learning that we didn’t expect to?
Michele wasn’t satisfied by the information that multiple measures of quantitative data provided about her students. She was really eager to know more about their metacognitive processes, especially those relevant to the questions they asked of the texts they were reading and those they posed to their classmates during class discussions.
Over and over again, Michele seemed to be asking her students, "How do you know what you know?" and "Why did you do things that way?" and "What helped you most?"
She asked a lot of other questions too, but these are the ones that showed up most often in our debriefs, and they prompted some powerful shifts in her practice and in the data she chose to collect.
Michele devoted herself to documenting learning made visible, and her cell phone was her greatest friend. She captured photos of kids at work, evidence of their thinking and learning, and work samples as well. When she printed them, we learned something unexpected and very worthwhile: the photos printed small enough to make display creation almost as tidy as what spreadsheets enable, but the pictures were large enough to allow a very clear reading of what students wrote.
Here’s an example of some of her most recent data. Her kids were stoked to be learning about cats and lions, and she was stoked to be learning about how they were able to question, compare, and contrast. In this display, she was able to capture evidence of their thinking as they read, made meaning from their reading, wrote, and reflected. These data were captured over time, and she was able to fit all of this evidence on our table at the same time.
It may be difficult to tell, but these samples are actually very small. This allowed us to look at multiple pieces at the same time, but more importantly, it enabled us to cluster different pieces according to what we saw. Then, we could mix and remix these data, bumping one photo up against another and noticing things we didn’t expect to.
In short, capturing these data provided Michele a far more satisfying view of what her students learned, what they knew, what they were capable of doing…and why. Far more rewarding than any information a spreadsheet could provide, these data took little time to gather, our work with them was efficient, and most importantly: Michele did not have to stop to test in order to learn what she needed to.
I’ll share examples from my work with other teachers in the coming days and weeks, but in the meantime, take a peek at what Terry Heick posted recently. What incredible food for thought for anyone who cares about improving our current state of assessment and data practices in education.
Now more than ever, I’m so grateful for those who propose great solutions to problems that many only complain about.