
Portfolio Problems: Rebuilding Assessment with Rich Tasks

We have the technology. We can rebuild assessment. We can make it better than it was. Better, stronger, more accurate.

We all understand how assessment has served as a destructive force in our classrooms. And we're all to blame. While the obvious culprits are the assessments foisted upon us by states and districts, let's not forget to point the finger at our own shortsighted teacher-designed tests. And the latter is the culprit we have more control over.

I didn’t take a course in assessment in college. The only assessments I knew how to create were from a schema based on the problems at the end of the chapter or the standardized exams we all loathe.

I'd like to propose a model of assessment based on resources we largely have at the ready. Now that we've got all these great tasks floating around out there, let's put them to use. Let's do this (Martin-Kniep & Picone-Zocchia, 2009):

Let's untether ourselves from the 50-question, multiple-choice/short-answer test and start assessing in ways that are less destructive: by posing rich (even engaging! yikes!) tasks throughout the school year, assessing in a way that honors the student thought process and allows for demonstration of growth, while compiling artifacts that would serve well in a thesis-style defense.

Portfolio Problems

(n) Rich problems that serve as potential assessments, worthy vehicles for a student to demonstrate proficiency in a group of standards. At the culmination of a school year, the artifacts created by solving such problems could constitute a demonstration of learning, either alongside or in lieu of a comprehensive exam.

These problems may take two or three days of class time when you take into account the posing of the problem, the facilitation of the problem, workshops that need to be taught or retaught, and the revision of student work to proficiency.

One could pose, facilitate, and assess such a problem, say, once a "chapter" (since we secondary teachers seem to be quite beholden to chapters), perhaps eight a year. Think of it: by the end of the year, you'd have eight rich artifacts per student that demonstrate learning. These artifacts could tell quite a story.

Perhaps the format of the artifacts is well-organized solutions on poster paper. Or perhaps they're formal write-ups with figures and tables embedded. Or even a recorded presentation.

A two-day Portfolio Problem may look like this:

[Image: sample two-day Portfolio Problem agenda]

A Portfolio Problem should:

  • Be accessible at some level to all students in the room
  • Allow for multiple solutions or multiple solution paths
  • Require a deep demonstration of content knowledge and adept application of content knowledge
  • Align to standards you have taught, potentially synthesizing multiple content standards

Let me provide an example.

This task is from Illustrative Mathematics (simply the go-to spot for CCSS-aligned tasks).

A Linear and Quadratic System

[Image: the task as posed on Illustrative Mathematics]

I love this task. There’s so much going on in this problem. A student must understand

  • how to find the equation of a linear function based on coordinates,
  • how to find the roots of a quadratic,
  • the actual meaning of a solution of a system of equations.

If a student can demonstrate competency on this task, there’s no need for me to give a test.
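The actual task statement lives on Illustrative Mathematics, so here's just a minimal sketch of the three understandings above with made-up numbers (a hypothetical line and parabola, not the ones in the task), using Python and SymPy:

```python
# A sketch of the reasoning the task demands, with made-up numbers.
from sympy import Eq, Rational, solve, symbols

x, y = symbols("x y")

# 1. Find the equation of a linear function from two coordinates (hypothetical points).
x1, y1, x2, y2 = 0, -2, 2, 0
m = Rational(y2 - y1, x2 - x1)          # slope = 1
line = m * x + (y1 - m * x1)            # y = x - 2

# 2. Find the roots of a quadratic (hypothetical parabola).
quadratic = x**2 - x - 2                # (x - 2)(x + 1)
roots = solve(Eq(quadratic, 0), x)      # [-1, 2]

# 3. A solution of the system is a point (x, y) that satisfies BOTH equations,
#    i.e., a point where the two graphs intersect.
intersections = solve([Eq(y, line), Eq(y, quadratic)], [x, y])  # [(0, -2), (2, 0)]

print(f"line: y = {line}")
print(f"roots of the quadratic: {roots}")
print(f"intersection points: {intersections}")
```

A student's poster or write-up would show the same chain of reasoning by hand; the point is that all three understandings have to show up in one artifact.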

The Assessment: Rubrics with external standards

Assessing such problems will require a bit more than an "8/10" slapped on the top of a paper. It's going to require a more holistic approach to assessment, namely a rubric. Moreover, since we're assessing eight-ish portfolio problems over the course of a year, it makes sense to have a generalized rubric set to external standards. That is, one could assess two different problems using the same rubric; the rubric is not task-specific (or at least, in addition to task-specific indicators, you include general ones as well).

For example, here are the rubrics we at New Tech Network co-designed with the Stanford Center for Assessment, Learning and Equity (SCALE). Here is the 12th grade math rubric. Note the generalized indicators for college-ready work.

But these particular rubrics aren’t the thing; any generalizable agreed-upon set of criteria for quality student work would do. Bryan has an excellent post from four years ago where he describes a portfolio system assessed on his agreed-upon Habits of a Mathematician. Be sure to check that post out ASAP both for the criteria he came up with and the system of portfolio/performance assessment he advocates.
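Whatever set of criteria you land on, the bookkeeping is simple. Here's a rough sketch of the idea, with placeholder indicator names (not SCALE's rubric and not Bryan's habits), of one generalized rubric scoring artifacts from two different Portfolio Problems:

```python
# A generalized (non-task-specific) rubric applied to artifacts from two
# different problems. Indicator names and levels are placeholders.
GENERAL_INDICATORS = [
    "Problem solving",
    "Reasoning and justification",
    "Communication and representation",
    "Precision",
]
LEVELS = ["Emerging", "Developing", "Proficient", "Advanced"]

def score_artifact(problem_name, ratings):
    """Record one artifact against the same general indicators,
    regardless of which Portfolio Problem produced it."""
    assert set(ratings) == set(GENERAL_INDICATORS)
    assert all(level in LEVELS for level in ratings.values())
    return {"problem": problem_name, "ratings": ratings}

# The same rubric works on two very different problems.
portfolio = [
    score_artifact("A Linear and Quadratic System", {
        "Problem solving": "Proficient",
        "Reasoning and justification": "Developing",
        "Communication and representation": "Proficient",
        "Precision": "Developing",
    }),
    score_artifact("A later modeling task", {
        "Problem solving": "Proficient",
        "Reasoning and justification": "Proficient",
        "Communication and representation": "Advanced",
        "Precision": "Proficient",
    }),
]

for artifact in portfolio:
    print(artifact["problem"], "->", artifact["ratings"])
```

Eight-ish of these per student and the year tells a story you can actually read.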

The portfolio and teacher learning

I’ll leave the choice of platform for portfolio to the #edtech folks, but here are some nice recommendations from Tom Vander Ark, along with further motivation for a portfolio system. Manila folders also work well.

But now we have all these artifacts that demonstrate student learning and – provided the problems are rich enough – can expose gaps in student learning. Now we actually have work worthy of a session of learning from student work, which I’ve talked about before. The data provided by this work can be much more instructive than a spreadsheet telling you which students have answered 6-out-of-8 questions correct on HSA.APR.D.7.

You’re making it sound pretty rosy, Geoff

It's true. I am. It's a huge commitment. You have to give up, perhaps, twice as many days for assessment in your class as you currently do. Perhaps three times if you want to give a more traditional assessment alongside a Portfolio Problem. It takes time to assess a Portfolio Problem against a rubric, time that you wouldn't need in a scantron format. And if you're going to use these artifacts as teacher learning opportunities (via LASW), that's quite a lift for a math department as well.

But, like I said, thankfully, many kindly bloggers and forward-thinking task designers have done some of the work for us: namely making quality tasks.

Which brings me to the “now what” portion of the post.

Suggested Portfolio Problems (the rest of the school year)

It’s late March (wait, really?) so we’re closer to the end of the year than the beginning. My imploration would be for teachers to undertake a year of Portfolio Problems for all the reasons mentioned above.

Still, the last few months of the school year might be a good time to get some baseline data for the 2015-16 school year by facilitating one Portfolio Problem. We wouldn't be able to (or need to) get into the whole digital portfolio system now. But it would give you some reps in facilitating a rich problem and collecting the student work, and a way to compare next year to this one.

Here are some suggested Portfolio Problems that may match with late-in-the-year standards you may be tackling in class right now.

Grade 4

Grade 5

Grade 6

Grade 7

Grade 8

Algebra 1 / Grade 9

Geometry / Grade 10

Algebra 2 / Grade 11

These are problems that allow for significant demonstrations of student understanding of multiple content standards, and often of the synthesis between multiple standards. With all the great stuff that our colleagues have created, we have the ability to do what UT's Dana Center has been advocating for decades now.

Suggested Portfolio Problems (going forward)

I've added asterisks to (what I think are) exceptional potential Portfolio Problems in the ol' PrBL-CCSS curriculum maps. I admit it's only my intuition that marks them as such. In almost all sections of the curriculum maps I was able to identify at least one problem that I felt was rich enough to be a Portfolio Problem, or at least had a kernel of potential that could be developed into a full Portfolio Problem.

[Screenshot: Grade 8 CCSS PrBL Curriculum Map with asterisked problems]

Wait, so why are we doing this again?

For the student, hopefully the implementation of 6-10 Portfolio Problems throughout a year will be a positive learning experience, even though it’s assessment. It could help blur the lines between assessment and instruction (see tweet above). Also, a student could be proud of and reflect on the work they’ve accumulated over the course of the year.

For the teacher, hopefully the implementation of 6-10 Portfolio Problems throughout a year will provide a much more instructive data set to determine whether students have truly mastered the content. It could also help with our messaging around math. We can authentically say "this is math," rather than saying "math is fun y'all, except we have to do this really boring math to see if you're doing it right."

One last note: I was chatting with some teachers and principals recently and was informed that their proactive assessment system (similar to the one I describe here) found purchase in the district. They were able to provide documented evidence of student learning using a performance-assessment-style system, and they have now been granted the agency by the district to forgo the common standardized benchmark assessments other local schools have to give. I'm not going to suggest your school, district, or state would be amenable to such a system of demonstrated student learning in lieu of standardized tests, but it's possible. And I'd submit that you might be surprised how open district officials can be to an alternative assessment system when presented with a convincing case.

==

Martin-Kniep, G. & Picone-Zocchia, J. (2009). Changing the Way You Teach: Improving the Way Students Learn.

See also: Raymond's review of Shepard's The Role of Assessment in a Learning Culture (2000).


Equalizing practice and assessment

I've made it a habit to retweet this once a month or so from Jenn (@DataDiva), whom I look up to as a leader in the field of teacher- and student-friendly assessment.

Citation: Martin-Kniep, G. & Picone-Zocchia, J. (2009) Changing the Way You Teach: Improving the Way Students Learn.

I retweet it because it's a good reminder and, hey, it's easy to miss in the never-ending scroll of twitter. It's so crucially important that it's one of those things that should be shouted from the rooftops (on a regular basis, apparently). (PS: anyone know how to get rid of my dumb tweet? I already tried unchecking "remove parent tweet" but with no success.)

One of the side-benefits of transitioning to an inquiry-based, problem-based classroom is that you can slowly start to scrap those old entire-class-day-killing tests. Ideally, once you’re humming along, the Assessment Problems and Problems for Learning will be largely indistinguishable.


It took me a while to realize the power of this. It wasn't until my final year of teaching that I gave a single task to students for their final exam. Students worked in groups and developed a presentation on how to solve a particular complex task; it was assessed with a rubric, which was exactly how the class was structured throughout the year.

However, I'll describe one thing I didn't do that is crucial, but first I need to get into some rubric weeds.

There ought to be two sections for most assessment tools:

[Image: a rubric split into task-specific indicators and general indicators of quality]

One thing I did not do throughout my classes, and that represents a huge gap in my practice, was assess against common standards of quality ("super-standards" is a term I just made up and need to sit with before I start using). I strictly assessed students against the particular content being taught at the time. "Demonstrated how this diagram proves the Pythagorean Theorem? Great! PROFICIENT." "Failed to simplify the quadratic into its simplest form? DEVELOPING." What was missing was tracking growth in particular mathematical proficiencies over time: more generalized proficiencies such as "Developing a model," "Using mathematical literary conventions," and "Representing scenarios in multiple ways" that are ubiquitous across most worthwhile problems. Think Bryan's Habits of a Mathematician. Shoot, think the Common Core Standards of Mathematical Practice. By using indicators that lie outside the realm of the particular content addressed in a problem, students can demonstrate growth over time and learn what it is to be a mathematician (and probably better articulate it).

Here’s an example of what I’m talking about: the top row is specific to this particular problem, the succeeding rows are to be assessed periodically throughout a course.
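To sketch the shape of such a rubric (the rows and ratings below are hypothetical, not the ones from my classroom), one record per artifact might look like this:

```python
# Hypothetical two-section rubric record: one task-specific row plus general
# rows that recur on every Portfolio Problem throughout the course.
rubric_record = {
    "problem": "A Linear and Quadratic System",
    "task_specific": {
        # assessed only on this problem
        "Interprets intersection points as solutions of the system": "Proficient",
    },
    "general": {
        # assessed periodically throughout the course
        "Developing a model": "Developing",
        "Representing scenarios in multiple ways": "Proficient",
        "Constructing a viable argument": "Developing",
    },
}

# Growth shows up by comparing the "general" section across the year's artifacts.
print(rubric_record["general"])
```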

But this brings us back to equalizing the assessment and instruction. If these are the things you assess, then these are the things you have to teach. And it has to be ongoing.

Also be sure to check out Raymond's analysis of Shepard's The Role of Assessment in a Learning Culture (2000), from which I'm going to straight-up crib his block quote:

“Good assessment tasks are interchangeable with good instructional tasks.”


Taxonomy of Problems (Part 2): Ways and what to assess

In my last post, I tossed out a loose taxonomy to name four different types of problems:

  • Content Learning Problems
  • Exploratory Problems
  • Conceptual Understanding Problems
  • Assessment Problems

I felt it necessary for myself. Up until now, I'd been labeling all problems equally: they're problems! They're tasks that are supposed to get students to learn stuff! But that implies a one-size-fits-all-ness that I don't think is practical. The planning, time frame, facilitation, scaffolding, and – for our purposes in this post – assessment and wrap-up all look different, even if the task itself doesn't look that different (after all, ideally we're all using nonroutine problems with a low floor and a high ceiling, regardless of whether the problem is being used to formatively assess student understanding or to create new knowledge).

It's tough to throw out exact examples for assessment since we're all working from different standards and tools. So I'm going to restrict it to the following universe of things to assess problems on: the most common Schoolwide Learning Outcomes (SWLOs) at New Tech Network (where I work) and the Common Core Standards of Mathematical Practice.

[Image: the universe of things to assess]

Now, different teachers and different schools I've worked with utilize these different hallmarks differently. In fact, many schools have difficulty even defining many of these indicators of student learning, let alone assessing them. But nevertheless, we're trying to get a general look and feel for what a problem rubric would look like, depending on what you're actually trying to accomplish with said problem. We're talking broad-brush here.

Content Learning Problems

Things to assess: Oral Communication, Professionalism/Work Ethic, Make sense of problems and persevere in solving them, Look for and make use of structure, Look for and express regularity in repeated reasoning

This might just be personal preference, but I’d be wary of assessing content knowledge in a learning opportunity for a student. If we are distinguishing between learning and confirmation problems, we might want to more rigorously assess content on the latter. Another one of my favorite wrap-up activities is this quick check-up as an exit ticket.

Exploratory Problems

Things to assess: Critical Thinking, Oral Communication, Collaboration, Model with Mathematics, Construct viable arguments and critique the reasoning of others, Use appropriate tools strategically

Assuming that the time-frame is a bit longer for an exploratory problem, and that the solutions and solution routes are varying, the wrap-up could consist of a formal presentation, followed by panel-style questioning.

Conceptual Understanding Problems

Things to assess: Critical Thinking, Collaboration, Written Communication, Reason abstractly and quantitatively, Construct viable arguments and critique the reasoning of others, Look for and make use of structure, Look for and express regularity in repeated reasoning

Here, I think it makes sense to have students reflect on and communicate what they’ve learned.

Assessment Problems

Things to assess: Critical Thinking, Written Communication, Reason abstractly and quantitatively, Use appropriate tools strategically, Attend to precision

In this case, one can easily envision a rubric that assesses the items above. Assuming these tasks are a bit more individualized, a written piece – almost like the free response section of an AP exam – might make sense. I’ll leave it up to the reader’s discretion whether or not to allot numerical point values.

============================

With these self-recommendations in hand, we can more easily (hopefully!) pick and choose what would go in a rubric and where, if a rubric is one of the tools in your toolbox.

Again, the idea is to make things easier, not more complex. And to better target outcomes for each and every problem. From these recommendations we might be able to construct a loose, lean problem planning template that is directly tied to the indicators you’re trying to peg with a particular problem. Maybe even some planned facilitation and scaffolding moves as well.
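As a rough sketch of what such a planning template could pull from, here's the taxonomy above mapped to its suggested "things to assess" (the lists are straight from this post; the structure around them is just one possible arrangement):

```python
# The four problem types mapped to their suggested "things to assess,"
# so a planning template can generate a rubric skeleton per problem type.
THINGS_TO_ASSESS = {
    "Content Learning": [
        "Oral Communication", "Professionalism/Work Ethic",
        "Make sense of problems and persevere in solving them",
        "Look for and make use of structure",
        "Look for and express regularity in repeated reasoning",
    ],
    "Exploratory": [
        "Critical Thinking", "Oral Communication", "Collaboration",
        "Model with Mathematics",
        "Construct viable arguments and critique the reasoning of others",
        "Use appropriate tools strategically",
    ],
    "Conceptual Understanding": [
        "Critical Thinking", "Collaboration", "Written Communication",
        "Reason abstractly and quantitatively",
        "Construct viable arguments and critique the reasoning of others",
        "Look for and make use of structure",
        "Look for and express regularity in repeated reasoning",
    ],
    "Assessment": [
        "Critical Thinking", "Written Communication",
        "Reason abstractly and quantitatively",
        "Use appropriate tools strategically", "Attend to precision",
    ],
}

def rubric_skeleton(problem_type):
    """Blank rubric rows for a given problem type, to be filled in during planning."""
    return {indicator: None for indicator in THINGS_TO_ASSESS[problem_type]}

print(rubric_skeleton("Exploratory"))
```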