In my last post, I tossed out a loose taxonomy to name four different types of problems:
- Content Learning Problems
- Exploratory Problems
- Conceptual Understanding Problems
- Assessment Problems
I felt it necessary for myself. Up until now, I’d been labeling all problems equally: they’re problems! They’re tasks that are supposed to get students to learn stuff! But that implies a one-size-fits-all-ness that I don’t think is practical. The planning, time frame, facilitation, scaffolding, and – for our purposes in this post – assessment and wrap-up all look different, even if the task itself doesn’t look that different (after all, ideally we’re all using nonroutine problems with a low bar and a high ceiling, regardless of whether a problem is being used to formatively assess student understanding or to create new knowledge).
It’s tough to throw out exact examples for assessment since we’re all working from different standards and tools. So I’m going to restrict myself to the following universe of things to assess problems on: the most common Schoolwide Learning Outcomes (SWLOs) from New Tech Network (where I work) and the Common Core Standards for Mathematical Practice.
Now, different teachers and different schools I’ve worked with use these hallmarks differently. In fact, many schools have difficulty even defining many of these indicators of student learning, let alone assessing them. Nevertheless, we’re after a general look and feel for what a problem rubric could contain, depending on what you’re actually trying to accomplish with a given problem. We’re talking broad-brush here.
Content Learning Problems
Things to assess: Oral Communication, Professionalism/Work Ethic, Make sense of problems and persevere in solving them, Look for and make use of structure, Look for and express regularity in repeated reasoning
This might just be personal preference, but I’d be wary of assessing content knowledge during a learning opportunity for a student. If we’re distinguishing between learning problems and assessment problems, we might want to assess content more rigorously on the latter. One of my favorite wrap-up activities is a quick check-up as an exit ticket.
Exploratory Problems
Things to assess: Critical Thinking, Oral Communication, Collaboration, Model with Mathematics, Construct viable arguments and critique the reasoning of others, Use appropriate tools strategically
Assuming that the time frame is a bit longer for an exploratory problem, and that the solutions and solution routes vary, the wrap-up could consist of a formal presentation followed by panel-style questioning.
Conceptual Understanding Problems
Things to assess: Critical Thinking, Collaboration, Written Communication, Reason abstractly and quantitatively, Construct viable arguments and critique the reasoning of others, Look for and make use of structure, Look for and express regularity in repeated reasoning
Here, I think it makes sense to have students reflect on what they’ve learned and communicate it in writing.
Assessment Problems
Things to assess: Critical Thinking, Written Communication, Reason abstractly and quantitatively, Use appropriate tools strategically, Attend to precision
In this case, one can easily envision a rubric that assesses the items above. Assuming these tasks are a bit more individualized, a written piece – almost like the free-response section of an AP exam – might make sense. I’ll leave it to the reader’s discretion whether to allot numerical point values.
============================
With these self-recommendations in hand, we can more easily (hopefully!) pick and choose what would go in a rubric and where, if a rubric is one of the tools in your toolbox.
Again, the idea is to make things easier, not more complex. And to better target outcomes for each and every problem. From these recommendations we might be able to construct a loose, lean problem planning template that is directly tied to the indicators you’re trying to peg with a particular problem. Maybe even some planned facilitation and scaffolding moves as well.
