In my last post, I tossed out a loose taxonomy to name four different types of problems:

- Content Learning Problems
- Exploratory Problems
- Conceptual Understanding Problems
- Assessment Problems

I felt it necessary for myself. Up until now, I’d been labeling all problems equally: they’re problems! They’re tasks that are supposed to get students to learn stuff! But that implies a one-size-fits-all-ness that I don’t think is practical. The planning, time frame, facilitation, scaffolding, and – for our purposes in this post – assessment and wrap-up all look different, even if the task itself doesn’t look that different (after all, ideally we’re all using nonroutine problems with a low bar and a high ceiling, regardless of whether the problem is being used to formatively assess student understanding or to create new knowledge).

It’s tough to throw out exact examples for assessment since we’re all working from different standards and tools. So I’m going to restrict it to the following universe of things to assess problems on: New Tech Network’s (where I work) most common **Schoolwide Learning Outcomes** (SWLOs) and the **Common Core Standards of Mathematical Practice**.

Now, different teachers and different schools I’ve worked with utilize these hallmarks differently. In fact, many schools have difficulty even defining many of these indicators of student learning, let alone assessing them. But nevertheless, we’re trying to get a general look and feel for what a problem rubric would look like, depending on what you’re actually trying to accomplish with said problem. We’re talking broad-brush here.

**Content Learning Problems**

Things to assess: Oral Communication, Professionalism/Work Ethic, Make sense of problems and persevere in solving them, Look for and make use of structure, Look for and express regularity in repeated reasoning

This might just be personal preference, but I’d be wary of assessing content knowledge during a learning opportunity for a student. If we are distinguishing between learning and confirmation problems, we might want to assess content more rigorously on the latter. One of my favorite wrap-up activities is a quick check-up as an exit ticket.

**Exploratory Problems**

Things to assess: Critical Thinking, Oral Communication, Collaboration, Model with Mathematics, Construct viable arguments and critique the reasoning of others, Use appropriate tools strategically

Assuming that the time frame is a bit longer for an exploratory problem, and that the solutions and solution routes vary, the wrap-up could consist of a formal presentation, followed by panel-style questioning.

**Conceptual Understanding Problems**

Things to assess: Critical Thinking, Collaboration, Written Communication, Reason abstractly and quantitatively, Construct viable arguments and critique the reasoning of others, Look for and make use of structure, Look for and express regularity in repeated reasoning

Here, I think it makes sense to have students reflect on and communicate what they’ve learned.

**Assessment Problems**

Things to assess: Critical Thinking, Written Communication, Reason abstractly and quantitatively, Use appropriate tools strategically, Attend to precision

In this case, one can easily envision a rubric that assesses the items above. Assuming these tasks are a bit more individualized, a written piece – almost like the free response section of an AP exam – might make sense. I’ll leave it up to the reader’s discretion whether or not to allot numerical point values.

============================

With these self-recommendations in hand, we can more easily (hopefully!) pick and choose what would go in a rubric and where, if a rubric is one of the tools in your toolbox.

Again, the idea is to make things easier, not more complex, and to better target outcomes for each and every problem. From these recommendations we might be able to construct a loose, lean problem planning template that is directly tied to the indicators you’re trying to peg with a particular problem. Maybe even some planned facilitation and scaffolding moves as well.

Preface…

After three years as a math coach, I will be going back to the classroom next year, so I am actively thinking about how PrBL will fit into my curriculum and assessment. This is perfect timing.

I am not familiar with SWLOs, but I am very familiar with the SMP. I definitely think it is worthwhile reflecting on what is assessable in a PrBL, but in my experience many of the SMP can be included, and sometimes every single one of them.

For example, in this problem (http://robertkaplinsky.com/work/5k-race/) I can make a case for all the standards except #8. Clearly 4, 3, and 5 are important as you mention. If I had to pick 3 of them I would probably pick 1, 4, and 3 in that order.

I like where you are going with this though. It reminds me a little of CGI (Cognitively Guided Instruction); until I learned about it, I had never considered what type of addition problem something was and had clumped them all together.

I had been thinking about using some sort of rubric where the students evaluate themselves and then the teacher gives their score in a column next to it. I also agree that “I’d be wary of assessing content knowledge in a learning opportunity.” Could you get away with assessing just the SMP here, and content later on an assessment problem?

I, too, like where this is going. I think PrBL is a bit like writing a song – sometimes you begin with the words (the problem) and sometimes with the music (the outcomes). The order in which this occurs might actually reflect where on the taxonomy scale the problem falls.

The use of “assessment,” perhaps especially for maths teachers and students, typically has a single inference – a TEST. Perhaps through judicious design of problems we can shift that model toward what assessment truly should mean – a measuring of a particular skill or ability. This in turn would provide the student, and indeed the teacher, with an idea or plan for the areas that need further development. This is true of both content and SWLOs. Therefore, assessment of some kind would almost always be part of a problem response. I agree that this may be a peer-assessment: how did Jonny perform as a member of your team, for example?

This is probably even more important when we consider ‘exploratory’ problems. The mathematics in which students engage might be wide-ranging and non-specific. But a group’s ability to effectively work together and create and present a solution will often make the difference between a positive learning experience and a negative one.

So I vote for assessment!