I’ve made it a habit to retweet this once a month or so from Jenn (@DataDiva) who I look up to as a leader in the field of teacher- and student-friendly assessment.

. @emergentmath “Assessment is at its best when it is ongoing and most difficult to distinguish from the teaching that is occurring.”

— Jenn Borgioli (@DataDiva) November 18, 2013


Citation: Martin-Kniep, G. & Picone-Zocchia, J. (2009) *Changing the Way You Teach: Improving the Way Students Learn*.

I retweet it because it’s a good reminder and, hey, it’s easy to miss in the never-ending scroll of twitter. It’s so crucially important that it’s one of those things that should be shouted from the rooftops (on a regular basis, apparently). (PS: anyone know how to get rid of my dumb tweet? I already tried unchecking “remove parent tweet,” but to no avail.)

One of the side-benefits of transitioning to an inquiry-based, problem-based classroom is that you can slowly start to scrap those old entire-class-day-killing tests. Ideally, once you’re humming along, the **Assessment Problems** and **Problems for Learning** will be largely indistinguishable.

It took me a while to realize the power of this. It wasn’t until my final year of teaching that I gave a single task to students for their final exam. Students worked in groups and developed a presentation on how to solve a particular complex task; it was assessed with a rubric, which was exactly how the class was structured throughout the year.

However, I’ll describe one thing I didn’t do that is crucial, but first I need to get into some rubric weeds.

There ought to be two sections for most assessment tools: one specific to the content of the problem at hand, and one addressing broader proficiencies assessed throughout the course.

One thing I did not do throughout my classes, and which represents a huge gap in my practice, was assessing against common standards of quality (“**super-standards**” is a term I just made up and need to sit with before I start using). I strictly assessed students against the particular content being taught at the time. “Demonstrated how this diagram proves the Pythagorean Theorem? Great! **PROFICIENT**.” “Failed to simplify the quadratic into its simplest form? **DEVELOPING**.” What was missing was tracking growth in particular mathematical proficiencies over time: more generalized proficiencies such as “Developing a model,” “Using mathematical literacy conventions,” and “Representing scenarios in multiple ways” that are ubiquitous across most worthwhile problems. Think Bryan’s Habits of a Mathematician. Shoot, think the Common Core Standards for Mathematical Practice. By using indicators that lie outside the realm of the particular content addressed in a problem, students can demonstrate growth over time and learn what it is to be a mathematician (and probably better articulate it).

Here’s an example of what I’m talking about: the top row is specific to this particular problem, the succeeding rows are to be assessed periodically throughout a course.

But this brings us back to equalizing the assessment and instruction. If these are the things you assess, then these are the things you have to teach. And it has to be ongoing.

Also be sure to check out Raymond’s analysis of Shepard’s *The Role of Assessment in a Learning Culture* (2000), from which I’m going to straight-up crib this block quote:

“Good assessment tasks are interchangeable with good instructional tasks.”

Geoff, thanks for the great post. I’ve been thinking a lot about assessment over the past two to three years, and this year I finally feel like I’m making meaningful progress. Then I read a post like this and I’m worrying/wondering if I’m barking up the wrong tree. My classroom is definitely not an inquiry-based environment, nor PrBL or PBL, though I’m trying to bring elements of all three of those approaches into my room. I’m what I would describe as a “recovering direct instructionist,” and am shifting my practice one step at a time to catch up (will it ever?) with my recent philosophical changes.

At any rate, here I am, approaching the end of my third year of SBG in Algebra 1, second year of SBG in Calculus, and first year of SBG in Algebra 2 and Precalculus, and I’m happy with the direction things are moving, and then… Again, your post has me wondering: Am I focused on the wrong things, polishing a you-know-what, or is my current process of assessment reflection and revision leading me, slowly but steadily, in the right direction?

This rambling comment is more a way for me to “reflect aloud” about the issues your post raised for me, so feel free to ignore most of the sentences that end in question marks. They’re mostly for me. On the other hand, if you have any additional insight to share, I’d read it in a heartbeat. :)

Thanks again for a thought-provoking post!

Cheers!

P.S. This might be what you’re looking for: https://dev.twitter.com/discussions/9514

Thanks for the kind words, Michael. And shoot, this entire blog is a think-aloud.

It’s definitely not an “either-or” situation: you’re doing inquiry or you’re not. Even in my “inquirest” moments I direct-taught many lessons. And I never did SBG (I hadn’t even heard of it, though it totally resonates with what I believe about how children learn). Think of it more as a spectrum.

It’s really hard to shift assessment practices mid-year, and maybe even inadvisable. The whole point of assessing “super-standards” is to track growth in students, which would be more difficult to do in the second semester alone.

I’d again point to one of Bryan’s posts on his portfolio assessment system, which I think meshes quite nicely with SBG: http://www.doingmathematics.com/2/post/2012/05/habits-of-a-mathematician-portfolio-assessment.html

I’ll also re-plagiarize my pal Jim May (@jimamay) from this post: http://emergentmath.com/2013/07/29/toward-changing-teacher-practice-and-mindsets/

*** Be Fanatical about the End and Endlessly Flexible about what it Might Look Like to Get There ***

Pingback: Equalizing Practice and Assessment (Part 2): What You Value Should Be What You Assess | emergent math

Pingback: Assessment via audibles: OMAHA! OMAHA! | emergent math