In this blog post, we’ll explore how to get specific with math or non-math classroom issues before we develop strategies. We’ll also see an example of how to build a rubric from the ground up.
“My kids just won’t work together.”
This (or something like it) is a common complaint I hear during professional development as I encourage teachers to facilitate rich tasks via small student groups. It’s an understandable pushback and also an unhelpful one. If you’re struggling with an issue in the classroom – whether it be groupwork, late work, or anything – the first step is to Get Specific.
Let’s review the opening statement with some questions and commentary:
“My kids…” Which kids? All of them? Some of them? Nelson Muntz? How many and which kids are we talking about? Is there a commonality between these students?
“…won’t…” What do you mean by “won’t”? They don’t want to? They don’t know how? What are they doing instead of working together?
“…work together.” What does it mean to “work together”? Does it mean to check each other’s work? Discuss a problem before moving on to the next? What structures have you provided to help them work together? What prompts, aside from “work together,” have you provided to help students know what it means to work together?
I provide this example to demonstrate how many of us are vague with our comments, when what we need is specificity. We can’t get better when the issue is vague and nebulous.
Sticking with the “work together” situation… this is a common issue among students (and adults). I have two children who hate groupwork more than anything in the world. It’s natural to want to work individually. Some of us are even wired that way. I have a difficult time collaborating with peers far beyond “you do this, I’ll do that.” So let’s drill down.
What are the specific things you want students to do while working together?
Do you want students to offer words of encouragement?
Do you want students to check the answer with their peers before they move to the next problem?
Do you want everyone in the group to share an idea?
Do you want students to divvy up the work?
If any of these are the case, say so and don’t just say “work together.”
(***Rubric sense starts tingling***)
In fact, let’s create a small rubric for this.
Let’s take the question around checking answers with their peers before moving to the next problem.
What is the behavior you want? Well, we just answered that in the previous sentence. Let’s make that the PROFICIENT column.
What is the current state of students? Let’s say students are currently working entirely individually, such that they aren’t checking answers with each other at all. That’s no good. Let’s put that in the far left column.
Now, what would be a stretch goal for students? What would it look like if students were really, really checking in on each other? Maybe: Checks with peers and makes sure everyone understands before proceeding.
And now to fill in the gap: what’s the midpoint between “doesn’t check” and “checks”? I’ll toss in the modifier “occasionally” but I’m guessing the more seasoned rubric developers may have better ideas. “Occasionally” isn’t terribly descriptive. Maybe we should be specific with language like “once or twice.” But that’s what I got and we’re in the middle of a PD session right now.
Now we have a small rubric on Corroborates Solutions With Peers which is an aspect of collaboration (not the whole thing). We can even use it throughout the year – every time we work on a problem set as a class. We can get better at it. We can improve the rubric as well (let’s go ahead and change it to “once or twice” and “all”).
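Assembled, the mini-rubric might look something like this (the PROFICIENT descriptor comes from the text above; the other column labels are placeholders of my own choosing):

```
                  BEGINNING             DEVELOPING             PROFICIENT            ADVANCED
Corroborates      Works entirely        Checks answers with    Checks answers with   Checks with peers and
Solutions With    individually; does    peers once or twice    peers before moving   makes sure everyone
Peers             not check answers     during a problem set   on to the next        understands before
                  with peers                                   problem               proceeding
```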
Because we were able to get specific about the behavior we’re trying to assess, we can now communicate and scaffold towards it. You’ll be able to document with some reliability how many students are at what specific level of this specific aspect of collaboration. I’ll admit: I haven’t offered any strategies in this blog post to treat the initial issue around students working together. But once we have specifics – and rubrics are a great way to get specifics – we can start addressing the problem areas and celebrate the bright spots.
I’d encourage you to check out some of New Tech Network’s rubrics around Collaboration, Agency, and Communication for other (better) exemplars.
To wrap this up with a meta-comment: I’m realizing more and more that I often don’t know what the second step is, but the first is to understand.
As we transition back into School Mode, I’d like to offer a brief encouragement to use this school year to establish a system of student portfolios. If you’d like a “why” around this, I’ll point you to my Shadowcon Talk from a couple years ago.
If you’d prefer not to watch a video, here are the highlights:
Student portfolios allow students to demonstrate and realize their own growth over time (ok, just watch the first minute and a half of the video, up until “Damn, I’ve grown!”)
Rich tasks provide better data about what students know and can do than standardized test scores
Rich tasks better reflect our instruction and, as any follower of this blog or my twitter feed knows…
“Assessment is at its best when it is ongoing and most difficult to distinguish from the teaching that is occurring.” (Martin-Kniep & Picone-Zocchia, 2009)
Provided you think of it and plan a bit before the school year starts, facilitating a portfolio system is not too difficult. Here’s what you need:
Six to ten rich problem-solving tasks
A calendar
A place to store student work
A tool to assess and/or have students self-reflect
A couple hours to collect the above items
Let’s take each one by one.
Six to ten rich problem-solving tasks. In other words, Portfolio Problems. There are many places to find such tasks. I’ve started by asterisking problems in my Problem-Based Curriculum Maps that I think are worthy of a student portfolio. But I’m sure there are also excellent assessment items in your textbook. Yes, that’s right, your textbook.
If it helps, consider this Quality Tasks scoring guide card for a quick check on whether or not a task is worthy. (From Necessary Conditions.)
A calendar. Put the tasks on the calendar now. Every 4-6 weeks block off a couple of days for a Portfolio Problem. You can change them later, but if they’re on the calendar, they’ll get deployed. If they aren’t, they won’t, as other seemingly more urgent business pops up. You can also build in twenty minutes of reflection and share-out time the following class period.
A place to store student work. Your options here are a file cabinet from an Army surplus store or Google Drive. (There are dozens of other options for physical work and digital work, but these are my go-to’s).
A tool to assess and/or have students self-reflect. After each problem, you and/or your pupils will need to assess their work in the moment. Ideally, you’d use a rubric with common indicators throughout the year. New Tech Network has Math rubrics (and a plethora of others, including Collaboration, Communication, and Agency) that work nicely. But feel free to use your own.
A couple hours to collect the above items. This is why we’re doing this now. Hopefully you have a couple hours of individual or departmental planning time built in to your in-service before the year starts. The most effective thing you can do with these precious hours is identify now – months in advance – the problems you’d like to serve as Portfolio Problems. Once you have those problems identified and on the calendar, there’s no stopping you.
We have the technology. We can rebuild assessment. We can make it better than it was. Better, stronger, more accurate.
We all understand how assessment has served as a destructive force in our classrooms. And we’re all to blame. While the obvious perpetrators of destructive assessment are those foisted upon us by states and districts, let’s not forget to point the finger at our own shortsighted teacher-designed tests. And the latter culprit is one we have more control over.
I didn’t take a course in assessment in college. The only assessments I knew how to create were from a schema based on the problems at the end of the chapter or the standardized exams we all loathe.
I’d like to propose a model of assessment based on resources that we have largely at the ready. Now that we’ve got all these great tasks floating around out there, let’s put them to use. Let’s do this (Martin-Kniep, G. & Picone-Zocchia, J. 2009):
“Assessment is at its best when it is ongoing and most difficult to distinguish from the teaching that is occurring.”
Let’s untether ourselves from the 50-question, multiple choice/short answer test and start assessing in ways that are less destructive: by posing rich (even engaging! yikes!) tasks throughout the school year, assessing in a way that honors the student thought process and allows for demonstration of growth, while at the same time compiling artifacts that would serve well in a thesis-style defense.
Portfolio Problems (n): rich problems that serve as potential assessments, worthy of a student demonstrating proficiency in a group of standards. At the culmination of a school year, the artifacts created through the solving of such problems could constitute a demonstration of learning, either alongside or in lieu of a comprehensive exam.
These problems may take two or three days of class time when you take into account the posing of the problem, the facilitation of the problem, workshops that need to be taught or retaught, and the revision of student work to proficiency.
One could pose, facilitate, and assess such a problem, say, once a “chapter” (since we secondary teachers seem to be quite beholden to chapters), perhaps eight a year. Think of it: by the end of the year, you’d have eight rich artifacts per student that demonstrate their learning. These artifacts could tell quite a story.
Perhaps the format of the artifacts is well-organized solutions on poster paper. Or perhaps they’re formal texts with figures and tables embedded. Or even a recorded presentation.
A two-day Portfolio Problem may look like this.
A Portfolio Problem should:
Be accessible at some level to all students in the room
Allow for multiple solutions or multiple solution paths
Require a deep demonstration and adept application of content knowledge
Align to standards you have taught, potentially synthesizing multiple content standards
Let me provide an example.
This task is from Illustrative Mathematics (simply the go-to spot for CCSS-aligned tasks).
A Linear and Quadratic System
I love this task. There’s so much going on in this problem. A student must understand
how to find the equation of a linear function based on coordinates,
how to find the roots of a quadratic,
the actual meaning of a solution of a system of equations.
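To see those three understandings in one place, here’s a quick sketch in Python of a made-up system of the same type (the numbers here are mine, not the Illustrative Mathematics task’s): find where the line through (0, 2) and (1, 3) meets the parabola y = x².

```python
import math

# A hypothetical linear-quadratic system (my numbers, not the IM task):
# the line through (0, 2) and (1, 3), and the parabola y = x**2.

# 1. The equation of the linear function, from two coordinates.
x0, y0, x1, y1 = 0, 2, 1, 3
m = (y1 - y0) / (x1 - x0)   # slope = 1.0
b = y0 - m * x0             # intercept = 2.0

# 2. Solutions of the system satisfy x**2 = m*x + b,
#    i.e. the roots of the quadratic x**2 - m*x - b = 0.
disc = m**2 + 4 * b         # discriminant of that quadratic
roots = sorted([(m - math.sqrt(disc)) / 2, (m + math.sqrt(disc)) / 2])

# 3. The actual *meaning* of a solution: a point lying on both graphs.
solutions = [(x, x**2) for x in roots]
print(solutions)  # [(-1.0, 1.0), (2.0, 4.0)]
```

A student who can produce those two intersection points and explain why each one lies on both graphs has exercised all three understandings above.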
If a student can demonstrate competency on this task, there’s no need for me to give a test.
The Assessment: Rubrics with external standards
Assessing such problems will require a bit more than an “8/10” slapped on the top of a paper. It’s going to require a more holistic approach to assessment, namely a rubric. Moreover, since we’re assessing eight-ish portfolio problems in the course of a year, it makes sense to have a generalized rubric set to external standards. That is, one could assess two different problems using the same rubric; the rubric is not task-specific (or at least, in addition to task-specific indicators, you include general ones as well).
For example, here are the rubrics we at New Tech Network co-designed with the Stanford Center for Assessment, Learning and Equity (SCALE). Here is the 12th grade math rubric. Note the generalized indicators for college-ready work.
But these particular rubrics aren’t the thing; any generalizable agreed-upon set of criteria for quality student work would do. Bryan has an excellent post from four years ago where he describes a portfolio system assessed on his agreed-upon Habits of a Mathematician. Be sure to check that post out ASAP both for the criteria he came up with and the system of portfolio/performance assessment he advocates.
The portfolio and teacher learning
I’ll leave the choice of platform for portfolio to the #edtech folks, but here are some nice recommendations from Tom Vander Ark, along with further motivation for a portfolio system. Manila folders also work well.
But now we have all these artifacts that demonstrate student learning and – provided the problems are rich enough – can expose gaps in student learning. Now we actually have work worthy of a session of learning from student work, which I’ve talked about before. The data provided by this work can be much more instructive than a spreadsheet telling you which students have answered 6-out-of-8 questions correct on HSA.APR.D.7.
You’re making it sound pretty rosy, Geoff
It’s true. I am. It’s a huge commitment. You may have to give up, perhaps, twice as many days for assessment in your class as you currently do. Perhaps three times if you want to give a more traditional assessment alongside a Portfolio Problem. It takes time to assess a Portfolio Problem against a rubric, time that you wouldn’t need in a scantron format. And if you’re going to use these artifacts as teacher learning opportunities (via LASW), that’s quite a lift for a math department as well.
But, like I said, thankfully, many kindly bloggers and forward-thinking task designers have done some of the work for us: namely making quality tasks.
Which brings me to the “now what” portion of the post.
Suggested Portfolio Problems (the rest of the school year)
It’s late March (wait, really?) so we’re closer to the end of the year than the beginning. My imploration would be for teachers to undertake a year of Portfolio Problems for all the reasons mentioned above.
Still, the last few months of the school year might be a good time to get some baseline data for the 2015-16 school year with the facilitation of one Portfolio Problem. We wouldn’t be able to (or need to) get into the whole digital portfolio system now. But it would give you some reps in facilitating a rich problem, collecting the student work, and comparing next year to this one.
Here are some suggested Portfolio Problems that may match the late-in-the-year standards you’re tackling in class right now.
These are problems that allow for significant demonstrations of student understanding of multiple content standards, and often the synthesis of multiple standards. With all the great stuff our colleagues have created, we have the ability to do what UT’s Dana Center has been advocating for decades now.
Suggested Portfolio Problems (going forward)
I’ve added asterisks to (what I think are) exceptional potential Portfolio Problems in the ol’ PrBL-CCSS curriculum maps. I admit it’s only my intuition that’s marking them as such. In almost all sections of the curriculum maps I was able to identify at least one problem that I felt was rich enough to be a Portfolio Problem, or at least had a kernel of potential that could be developed into a full Portfolio Problem.
Wait, so why are we doing this again?
For the student, hopefully the implementation of 6-10 Portfolio Problems throughout a year will be a positive learning experience, even though it’s assessment. It could help blur the lines between assessment and instruction (see tweet above). Also, a student could be proud of and reflect on the work they’ve accumulated over the course of the year.
For the teacher, hopefully the implementation of 6-10 Portfolio Problems throughout a year will provide a much more instructive data set to determine whether students have truly mastered the content. It could also help with our messaging around math. We can authentically say “this is math,” rather than saying “math is fun y’all, except we have to do this really boring math to see if you’re doing it right.”
One last note: I was chatting with some teachers and principals recently and was informed that their proactive assessment system (similar to the one I describe here) found purchase in the district. They were able to provide documented evidence of student learning using a performance assessment-style system, and now they’ve been granted the agency by the district to not give the common standardized benchmark assessments other local schools have to give. I’m not going to suggest your school, district, or state would be amenable to such a system of demonstrated student learning in lieu of standardized tests, but it’s possible. And I’d submit that you might be surprised how open district officials might be to an alternative assessment system if presented with a convincing case.
Martin-Kniep, G. & Picone-Zocchia, J. (2009) Changing the Way You Teach: Improving the Way Students Learn
See also: Raymond’s review of Shepard’s The Role of Assessment in a Learning Culture(2000)
I retweet it because it’s a good reminder and, hey, it’s easy to miss in the never-ending scrawl of twitter. It’s so crucially important that it’s one of those things that should be shouted from the rooftops (on a regular basis, apparently). (PS: anyone know how to get rid of my dumb tweet? I already tried unchecking “remove parent tweet” but to no avail.)
One of the side-benefits of transitioning to an inquiry-based, problem-based classroom is that you can slowly start to scrap those old entire-class-day-killing tests. Ideally, once you’re humming along, the Assessment Problems and Problems for Learning will be largely indistinguishable.
It took me a while to realize the power of this. It wasn’t until my final year of teaching that I gave a single task to students for their final exam. Students worked in groups and developed a presentation on how to solve a particular complex task; it was assessed with a rubric, which was exactly how the class was structured throughout the year.
However, I’ll describe one thing I didn’t do that is crucial, but I need to get into some rubric weeds.
There ought to be two sections to most assessment tools: indicators tied to the particular content being assessed, and indicators of more general mathematical quality.
One thing I did not do throughout my classes – and it represents a huge gap in my practice – was assess against common standards of quality (“super-standards” is a term I just made up and need to sit with before I start using). I strictly assessed students against the particular content being taught at the time. “Demonstrated how this diagram proves the Pythagorean Theorem? Great! PROFICIENT.” “Failed to simplify the quadratic into its simplest form? DEVELOPING.” What was missing was tracking growth in particular mathematical proficiencies over time: more generalized proficiencies such as “Developing a model”, “Using mathematical literary conventions”, or “Representing scenarios in multiple ways” that are ubiquitous across most worthwhile problems. Think Bryan’s Habits of a Mathematician. Shoot, think the Common Core Standards for Mathematical Practice. By using indicators that lie outside the realm of the particular content addressed in a problem, students can demonstrate growth over time and learn what it is to be a mathematician (and probably better articulate it).
Here’s an example of what I’m talking about: the top row is specific to this particular problem, the succeeding rows are to be assessed periodically throughout a course.
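Since I can’t reproduce the example here, a rough sketch of the row structure I mean (the row labels are my own, pulled from the proficiencies named above):

```
Row 1 (this problem only):  content-specific indicator, e.g. “Uses the diagram to
                            prove the Pythagorean Theorem”
Row 2 (all year):           Develops a model
Row 3 (all year):           Uses mathematical literary conventions
Row 4 (all year):           Represents scenarios in multiple ways
```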
But this brings us back to equalizing the assessment and instruction. If these are the things you assess, then these are the things you have to teach. And it has to be ongoing.
Also be sure to check out Raymond’s analysis of Shepard’s The Role of Assessment in a Learning Culture (2000). From it, I’m going to straight up crib his block quote:
“Good assessment tasks are interchangeable with good instructional tasks.”
In my last post, I tossed out a loose taxonomy to name four different types of problems:
Content Learning Problems
Exploratory Problems
Conceptual Understanding Problems
Assessment Problems
I felt it necessary for myself. Up until now, I’d been labeling all problems equally: they’re problems! They’re tasks that are supposed to get students to learn stuff! But that implies a one-size-fits-all-ness that I don’t think is practical. The planning, time frame, facilitation, scaffolding, and – for our purposes in this post – assessment and wrap-up all look different, even if the task itself doesn’t look that different (after all, ideally we’re all using nonroutine problems with a low bar and a high ceiling, regardless of whether a problem is being used for formatively assessing student understanding or for creating new knowledge).
It’s tough to throw out exact examples for assessment since we’re all working from different standards and tools. So I’m going to restrict it to the following universe of things to assess problems on: New Tech Network’s (where I work) most common Schoolwide Learning Outcomes (SWLOs) and the Common Core Standards for Mathematical Practice.
Now, different teachers and different schools I’ve worked with utilize these different hallmarks differently. In fact, many schools have difficulty even defining many of these indicators of student learning, let alone assessing them. But nevertheless, we’re trying to get a general look and feel for what a problem rubric would look like, depending on what you’re actually trying to accomplish with said problem. We’re talking broad-brush here.
Content Learning Problems
Things to assess: Oral Communication, Professionalism/Work Ethic, Make sense of problems and persevere in solving them, Look for and make use of structure, Look for and express regularity in repeated reasoning
This might just be personal preference, but I’d be wary of assessing content knowledge in a learning opportunity for a student. If we are distinguishing between learning and confirmation problems, we might want to more rigorously assess content on the latter. Another one of my favorite wrap-up activities is this quick check-up as an exit ticket.
Exploratory Problems
Things to assess: Critical Thinking, Oral Communication, Collaboration, Model with Mathematics, Construct viable arguments and critique the reasoning of others, Use appropriate tools strategically
Assuming that the time-frame is a bit longer for an exploratory problem, and that the solutions and solution routes are varying, the wrap-up could consist of a formal presentation, followed by panel-style questioning.
Conceptual Understanding Problems
Things to assess: Critical Thinking, Collaboration, Written Communication, Reason abstractly and quantitatively, Construct viable arguments and critique the reasoning of others, Look for and make use of structure, Look for and express regularity in repeated reasoning
Here, I think it makes sense to have students reflect on and communicate what they’ve learned.
Assessment Problems
Things to assess: Critical Thinking, Written Communication, Reason abstractly and quantitatively, Use appropriate tools strategically, Attend to precision
In this case, one can easily envision a rubric that assesses the items above. Assuming these tasks are a bit more individualized, a written piece – almost like the free response section of an AP exam – might make sense. I’ll leave it up to the reader’s discretion whether or not to allot numerical point values.
With these self-recommendations in hand, we can more easily (hopefully!) pick and choose what would go in a rubric and where, if a rubric is one of the tools in your toolbox.
Again, the idea is to make things easier, not more complex. And to better target outcomes for each and every problem. From these recommendations we might be able to construct a loose, lean problem planning template that is directly tied to the indicators you’re trying to peg with a particular problem. Maybe even some planned facilitation and scaffolding moves as well.