This post continues a conversation that I started last week about visible learning, documentation, and the use of Grounded Theory methodologies. My thinking and work have evolved over time, in response to the learning I'm fortunate enough to do at the WNY Young Writer's Studio and inside various western New York school districts.
Studio teachers use these approaches to fuel independent action research as they strive to uncover instructional practices that truly meet the needs of their students. Teachers inside of schools use these approaches to drive their data and inquiry team meetings, which are intended to produce promising intervention approaches. The stories unfolding in both arenas have us pondering the limitations imposed by guiding questions.
Here’s what we wonder:
1. Who or what influences the guiding questions framing our investigations, and who or what should be? How do we know?
2. Although guiding questions provide us a pathway and illuminate certain elements of it, is it possible that they dim our peripheral vision or prevent us from looking at essential elements of the entire setting head-on? How might this compromise the quality of our hunches?
3. What would happen if we took a step back and used a wider lens? For instance, if we began with a topic or concept rather than a question, how would that change our investigation and potentially, our findings?
4. How should our answers to these questions inform the way we document visible learning and organize and analyze the data we gather?
Currently, about half of the educators I’m working with are using guiding questions to frame their learning and work from the outset of their investigations while the other half are not. This has resulted in some very interesting findings.
Those who begin without questions typically go into their classrooms to investigate broad topics that various measures point them toward. For instance, Michele studied student questioning. She refrained from narrowing that focus further and simply went into her classroom intent on capturing abundant data relevant to that topic. She looked for trends, then coded and classified them. These findings eventually inspired guiding questions, but because she hadn't allowed her assumptions and biases to shape them, Michele knew that her guiding questions were solidly grounded in real-time classroom experiences.
Quantitative data alone can't reveal the learning, work, and behavior of living, breathing students. It's no wonder, then, that the questions provoked by Michele's uncommon approach were far different from the ones she would have framed independently, with only her grade book and standardized assessment information to guide her.
How misguided her interventions might have been.
What are your thoughts about all of this? Leave a comment here or tap my shoulder on Twitter.