Why would we design all problems and facilitation in a similar way without having the type of problem identified?

It’s possible I’ve been a bit too broad-brush when describing Problem Based Learning (PrBL) in terms of task design and facilitation. I’m beginning to wonder if we need a **taxonomy of problems**. After all, every problem implemented in a classroom has different intended outcomes, which affect the design and facilitation of the task. After a while, you start to notice the similarities and patterns that like problems share. You also notice the differences. What I have not encountered, thus far, is a way of classifying the problems, which I think would make them easier to design, facilitate, and assess. Once you have the problem type identified, you may be able to use different tools, templates, rubrics, etc. around which the task may be designed.

Maybe it would be better if I just got to it. Consider this an attempt at **Problem Taxonomy**.

The four types of problems:

**Problems for Learning/Constructing New Knowledge**

These are problems that foster new knowledge within students.

*Content Learning Problems*

These are problems that have a predetermined, content-oriented outcome. Most of the time, this is what is often meant by **Problem Based Learning**. Or at least, it’s what I’ve basically meant in the past. Content Learning Problems are directly tied to a **specific standard or standards**. **Scaffolding is often planned** by the teacher ahead of time. Students work collaboratively and **plan, strategize, struggle, and are coached** toward a solution with the aid of the teacher. These may take **1-3 days**.

*Exploratory Learning Problems*

These are problems that may foster new knowledge within students, but there is no specific content standard tied to them. Mathematical **Practice standards**, though, such as those defined in the Common Core, or Bryan’s Habits of a Mathematician, truly shine in this type of problem. The **solution and solution route may be unknown to the teacher**. Relatedly, the teacher may not have the scaffolding planned or predetermined until the need is made manifest. There is no prescribed method toward a solution, and collaborative groups may arrive at differing solutions and solution routes. These may take **1-5 days**. Some may even call these “projects”.

**Problems for Confirmation**

These are problems intended to stand by themselves with minimal assistance or facilitation by the teacher. Students are to demonstrate the knowledge they have gained through Problems for Learning. I should note that, despite the naming convention, these problems don’t necessarily preclude learning opportunities.

*Conceptual Understanding Problems*

These are problems where a student puts the pieces together and begins to speak fluently about the content. If there was any confusion about the mathematical concept before, it’ll get crushed here. The **scaffolding is quite intentionally student-centered**, with the specific intention of getting students to discuss the mathematics. Possibly the problem itself is more purely mathematical, or at least along the Skynet Line. Technology such as Geogebra investigations may be involved in order to solidify reasoning. These may take **roughly a day or two**.

*Assessment Problems*

Don’t tell the school district I worked for, but I once gave a single problem for my final exam of the year. It was a problem adapted from one of the Dana Center’s Assessments. I said, basically, “here’s your problem, you have two hours to show me what you’ve got. Now go!” The subtext of which was, “according to district rules, this single problem will count for 25% of your grade for the semester.” That’s basically what this kind of problem entails. Possibly **solved individually**, these problems are **tied directly to content, require some decoding**, and offer a chance for **all to excel**. I’d doubt there would be a presentation involved. Ideally (unlike in the scenario I described above), there would be some **formative feedback or revision process** before a numerical grade is attached. The point is, these are problems where students should know the content involved and be able to **explain it with great fluidity**.

=================================

So what do you think? Does this taxonomy work for you? Obviously the fine Art of Teaching necessitates that many of these types of problems overlap and intermingle. But in the design of a task, it’s important that you determine exactly what the outcomes should be. Are you constructing new mathematical knowledge within your students? Are you offering a place of creativity and non-linear thinking? Are you solidifying knowledge (Jo Boaler refers to this as “compression”)? Are you assessing understanding? Until you answer these questions, I’d suggest you can’t really fully develop the task.

In the next post, I’ll talk about what to assess and ways to assess each of these types of problems.

Let me start by saying that I appreciate your posts because they make me reflect and think about my practice. While reading your post, I tried to think about the types of problems I’ve worked on and where they would fall in your classification system. Honestly though, I don’t know how they would fit in your taxonomy.

In every problem-based lesson I have created, I expect students to demonstrate many of the Standards for Mathematical Practice. In every lesson I expect students to have to apply their content knowledge, and if they do not have the required knowledge, I expect them to learn it, both conceptually and procedurally, so they can return to the problem and apply it. That doesn’t seem to be a neat fit anywhere.

I can say that I haven’t used any problems like your “Assessment Problems” category, but that sounds very similar to performance tasks on the CCSS assessments.

Maybe it would be helpful to take some example problems and see how they would fit on your taxonomy?

I thought about that, Robert. It’s difficult to look at a task and place it into one of these four categories. What I was/am trying to get at is the scaffolding and assessment pieces that accompany a problem. To your point, I implemented the problem I mention in the Assessment Problem section as a sort of final exam one year, and incorporated it into my unit on quadrilaterals as a Conceptual Understanding problem the following year (though I wasn’t naming them at the time).

Maybe this’ll be cleared up in the next post, but this naming system, I think, refers to the facilitation through the problem – the scaffolding, the debrief, and the assessment – rather than the task itself. Depending on your intended outcomes, those three elements can look very different, even if the task itself doesn’t.

— Geoff

Pingback: Taxonomy of Problems (Part 2): Ways and what to assess | emergent math

Reading this again before I comment on your latest post, I think every problem I have done has fallen into the Content Learning Problems category. It didn’t seem so clear the first time I read it… but now it seems obvious according to your taxonomy.

Gotcha. I appreciate the feedback. I’d say *most* of the problems I’ve facilitated were of the content learning variety. A vast majority even. But occasionally I’d have a problem to assess, a problem to explore, or a problem to reinforce to buttress those content learning problems.

Hi Geoff. Great to read your thoughts. I like and agree with the idea of the taxonomy, and it provides a reference for what we do. I suspect, like you and Robert, that the majority of my problems are content focused, but that’s where I usually begin. Sadly there is usually a time constraint in which we have to deliver our content here in Sydney, but there is the challenge, I suppose – finding opportunities to run exploratory problems. I also agree that a taxonomy provides a framework but not a divide – many problems will have elements of each. But I like the taxonomy because it forces me to consider the target outcomes both in terms of content and what we call ‘working mathematically’ skills.

Finally, I understand what you’re aiming for with the assessment problems, but I’d argue that every problem is an assessment of something. I’ll email my version of your taxonomy (can’t find an attachment option here).

Finally finally, one of the key words you’ve used is fluidity. To me this represents not only the ability to recall and use a particular procedure or technique, but the understanding that the skill is suitable even when the context is not fully clear.
