Have you told your students how much you value honest attempts at solutions to a problem? Even incorrect solutions? Then you have to assess this way.

You can’t tell students that you value their incorrect attempts at solutions when you take off points when they get an answer wrong. Worse, you can’t say you value the process over the solution when you assess using a multiple choice or short answer examination. Students are too smart and they will see right through that facade, as well they should.

*“Mr. Krall, you **say** you want us to be persistent problem solvers and you value our mathematical thinking, but you still took off half-credit for my solution attempt.”*

*“And you said the highest I can make on my re-test is a 70.”*

I’m not suggesting an “everyone gets an A” or a “crocodile in Spelling” method of assessment, just that you need to put your grade where your mouth is.

Similar to Dan’s “What Do You Worship?” question, I’d ask: what do you, the facilitator, value? Value both in an ethereal “boy, I sure would like this!” way and in a “yes, this is what you will be assessed on” way.

In the Why/How/What framework, “why” has been addressed all over the blogosphere (but here’s a thing), “how” was partially addressed in my previous assessment post. As for “what”:

*You will be assessed on your growth. You will be assessed on your persistence. You will be assessed on your various methods of solutions. You will be assessed on your communication.*

*You will not be assessed on the correctness of your answer. You will not be assessed based on the boxed number on the right side of the page. You will not be assessed using rote tasks that are easily solvable using a formula chart.*

Also, this goes beyond “I allow retakes.” A retake is just one more chance to get it right (usually accompanied by a significant numerical penalty). What I’m describing is not penalizing a student for a wrong answer at all, or at least honoring the solution attempt in a way that isn’t a penalty in disguise (i.e. “partial credit”). This is a huge assessment shift, and it requires a more sophisticated assessment tool than an answer key can provide (such as these).

It’s really difficult to switch gears like this in the middle of the year, though. A certain amount of foundational work needs to happen first. And frankly, it’ll probably take a few rounds of assessment before students even believe you. You’re probably not the first person to say they value honest – if incorrect – solution attempts, only to turn around and dock students in the name of “well, the SAT doesn’t allow redos.”

I definitely agree with assessing growth and mathematical thinking, as well as individual student progress, and I am trying to do this in a multi-level geometry class I am teaching (check back with me in May to see whether I succeeded). But I have a question for you – you said “You will not be assessed on the correctness of your answer.” Is there no extra value for actually completing a question correctly? There are many things that a student may do correctly and deserve credit for, but doesn’t that student deserve additional credit if they can actually close their mathematical deal accurately? I’m wondering what you think about this.

Hi Wendy, best of luck with the growth-measuring. Can’t wait to hear about it on your awesome blog!

Short answer: yeah, I think so. Offer full credit to students who attack a problem, regardless of the numerical outcome. Assuming the student gives a problem an honest-to-goodness attempt, I see no reason why he or she cannot receive full credit. Your probing questions make me think about offering correctness as extra credit. This of course presupposes that a teacher already fully values process over product, both outwardly and inwardly.

There is also a caveat about the kinds of questions being asked of students. For an approach that doesn’t penalize incorrectness, one needs to pose complex tasks and assess using something other than an answer key (such as a Problem Solving rubric), or pose conceptual questions like “what could the student do to improve their work?”, “how would you strategize to solve this task?”, or “show two ways to solve this problem.”

An essential question is coming to light: “How does what we measure influence how we measure, and our actions?” On a recent Pythagorean Theorem assessment, one student got every problem correct except for one: he forgot to square one of the legs. It was a careless mistake. Technically, his performance on the assessment wasn’t flawless, but in the grade book I recorded 100%. (I do a form of SBG where meeting or exceeding the target is converted into a score of either 90%, 95%, or 100%.) Was the mistake worth docking a student 5%? I thought not.

On a related note, what would happen if we turned the tables and the student became responsible for assessing his or her own growth as a learner? On a scale of 1 to 3 (1 = not yet, 2 = developing, 3 = met): I rate my persistence a ___ because ___. I rate my mathematical argument a ___ because ___.

Having students monitor their own progress and reflect on their growth is much more powerful than any external “rating service.” Then the focus of our feedback becomes more about the learning and less about the grade.

Been lurking around but finally commenting. A few thoughts:

What resonates with me is the “docking students in the name of, well, the SAT doesn’t allow redos.” In my little elementary school world we often begin by using manipulatives and concrete representations (for example, base 10 blocks, fraction tiles, etc.), then move toward pictures, and finally some type of algorithm or procedure. Some kids can move along that continuum faster than others, but because “you can’t use the (fill-in-the-blank manipulative) on the test,” teachers tend to pull the rug out from under some kids faster than they should. I know that’s not exactly what you’re talking about, but it’s an example of how placing all our emphasis on performance on “the test” can corrupt instruction and practice.

Problem solving has been a focus of ours, and we have developed rubrics where kids self-assess their commitment, calculations, content, and communication (we call it “the 4 C’s”) and receive teacher feedback as well.

Finally, I know that I need to spend more time developing really good assessment questions, ones that have lots of room, ones that have low barriers to entry and high ceilings. Sometimes it’s the question that’s bad, so you get bad responses.

Geoff, thanks for all your work here. It’s been thought-provoking and inspiring.

I have been trying to emphasize learning from attempts. What I have implemented is for students to work through a set of problems. I grade them and give feedback on what they missed and what they need to fix.

They are then expected to use that feedback to learn from their mistakes and make improvements in their work. Once they have successfully done this (explaining their work), they receive full credit.

They have as many attempts as they need to get to the point where they have shown they have it right, and ideally everyone would have a hundred by the end of the quarter.

At the same time, I can emphasize perseverance and learning from mistakes with some comfort that I am, to some extent, staying true to what I value.