Yesterday, I opened a conversation about the roles that assessment and intervention play in attending to the needs of struggling and reluctant readers. Would you like to know the most important thing I’ve learned over the years?

That it’s important for me to set aside what I believe and what I’m passionate about in service to others.

When it comes to assessing the needs of readers, the data we’re looking at don’t provide answers on their own. Hunches emerge from what we see, and our passion and our expertise can narrow our perspectives.

This is why I promote assessment and inquiry over particular practices or pedagogy. I’m grateful to everyone who publishes and promotes what they know, particularly when it’s informed by research and I can trace their sources. This sort of expertise contributes to the remedies we create and test. But the teachers I support need good information about who their readers are, where their strengths lie, and what their needs might be before they begin lifting and dropping potential prescriptions on them. More than numbers, they need a story that’s informed by evidence.

This leads to a bigger and more important question: how do we get this information?

All of the schools that I work in are pursuing that answer right now.

Race to the Top put quantitative measures at the forefront of this conversation.

Inquiry teams are quickly realizing that the data that emerge from these assessments don’t tell us enough, though.

Assessment and inquiry are both complex animals. These are the questions that keep presenting themselves:

  • What do we need to know about the readers we serve, and which assessments provide the best information about that?
  • What are the strengths of the assessments we are using, and what are their limitations?
  • How are big data best used, and what other measures can complement them?
  • How do we become increasingly confident in the assessments we are using, the hunches we’re forming from them, and the interventions we’re testing?
  • What systems are in place to ensure sustained improvement?

Big data aren’t enough.

And our interim assessments aren’t good enough. We know this. We also know that our assessment systems must expand as our needs for better or more specific types of information do.

We’ve learned that talking with readers can produce powerful data. An example: when we asked readers to make meaning from brief but incredibly complex passages, we found that they could do it. Not only that, but during these experiences, struggling and reluctant readers engaged sooner and contributed to class discussions more frequently. When I asked some of them why this was the case, they suggested that using really hard passages levels the playing field.

“It’s hard for everyone, not just me,” one student admitted last week. “We’re all struggling and working it out together. It’s hard, but it’s fun too.”

That’s a powerful story.

It also flies in the face of everything that I think I know about engaging and motivating struggling readers.

But it’s not about me.

It’s about them.
