On Sunday, I offered up this quick post about what I’ve learned from facilitating data dialogue in different schools over the years. Starr Sackstein caught wind of it on Twitter and connected my reflections to an earlier conversation that she began on her blog about the testing dilemma. As I chatted with her in the comments below her post, this thought emerged:

“I tend to see trend data from standardized assessments like markers in a forest. They provide some indication of where we want to travel, but no clue about what we will find when we get there. We need to take that walk and pay careful attention to the lay of the land, the texture of the trees, the way the colors and the scent of the air change, and how one thing might influence the other. There are no answers. It’s an experience that leads us to better ideas and hunches, that’s all.”

Ironically, this awareness has inspired the groups of teachers I work with to seek other data that provide different, and oftentimes deeper, evidence about the strengths and the needs of the kids we’re hoping to help.

When administrators require the inquiry teams that I facilitate to begin developing hunches by exploring New York State Assessment trend reports, we typically pull little more out of that entry-level analysis than an awareness of performance indicators that students struggled with over time. When we had the ability to revisit items and passages, we could refine our hunches a bit further by considering how other factors may have influenced performance. And we could disaggregate the data by looking at specific populations of students for specific purposes, but again—we knew that there were far more variables at play than one set of data could have hoped to surface.

And now, our data-wiseness should probably give us further pause: this year’s NYS English Language Arts assessments are newly aligned to the Common Core. A new scale will be applied to the results. Even if I were in the business of drawing hard and fast conclusions about the needs of students from state assessment data (which I’m not), I know that trying to do so with this year’s reports might be completely misguided. These data are not robust enough. And we can’t revisit the items or the passages anymore, either. They’re locked down. So our definitions of what it means to be data-informed need to become far more sophisticated. So do our processes.

And maybe this was always the case. Maybe the unintended consequence of having so much data about the test, and open access to it, was a less than ideal perspective about what it means to be data-informed and less than ideal processes for using data to intervene in support of learners.

Thoughts?
