When I was a junior in college, I did some part-time work tutoring students in writing through my university’s literacy center. I was working with a ninth-grade student, Allan, and he was concerned about how his writing ability was affecting his grade. As we were getting to know one another, many of my questions focused on what his teacher expected from Allan and his classmates when it came to writing.

Me: So, your classmates are getting high marks, but you’re struggling. Does your teacher go over what he expects on your papers?

Allan: Well, he goes over the rubric and . . .

Me: Hang on. Rubric? What’s that?

Allan: It’s like, a thing… a paper my teacher gives us to tell us how we’re doing with our writing, what he expects, you know?

Me: . . .

That’s right: as a junior in college, this was the first time I was becoming aware of this measuring tool. Now, to be fair to my teachers up until that time, they were using clear assessment tools to evaluate my writing and give me feedback. I remember checklists full of writing features that my teachers would tick off as they found each one in my papers, and at the end I would get a certain number of points based on my performance. But this was the first time I had to pay attention, because my student’s grade depended on the extent to which he could calibrate his writing to the rubric.

I helped Allan work his way to receiving top marks in his class according to the scoring guide his teacher gave the class. It was very satisfying to watch him succeed because of our collaboration, and it was one of the many formative experiences that led me to pursue a career in teaching ELA at the high school level. But, soon after I started teaching, assessing student writing according to rubrics nearly caused me to leave the profession.

My First Experiences Using Rubrics to Assess Student Writing

Let me take you back to my first year of teaching, when I was working at a school that kept writing portfolios for each student. Entering that environment, I found myself knee-deep in rubrics my site had developed. We were also teaching from a curriculum that provided rubrics. Helpful for a young teacher, right?

The rubrics first handed to me as a teacher were analytic, which meant that evaluation happened inside a lot of boxes. My credential program used rubrics to assess me as a student, but I was not taught how to use them as a teacher. Right off the bat, I was expected to assess student writing according to these rubrics. Truth be told, I was overwhelmed.

My first experience assessing writing with an analytic rubric was brutal. Remember above when I said that I almost quit because of rubrics? As I get into this experience, I need to first tell you about the worst piece of advice I have ever received as a teacher (I know, this is turning into a story within a story, but go with me on this).

In a you’re-new-to-this-district orientation all new hires must attend, we were going around the table and talking about some aspect of teaching that we were nervous about. I shared that I was overwhelmed by the thought of writing a syllabus. And then it happened. The worst piece of advice I ever received entered the conversation: make sure that whatever you put in there can hold up in court, because you might end up there some day. I don’t even remember what the man who uttered this dehumanizing legal perspective looked like, but for some reason (maybe because I was a scared and vulnerable first-year teacher) this bit of advice sank down deep inside and became the beginning of my stress as an English Language Arts teacher. It especially affected me when it came to evaluating student writing.

I could only articulate it years later, but what happened every time I sat down to evaluate student writing with an analytic rubric was a slow-burning psychological torment. That may sound like hyperbole, but I have the gray hairs, premature receding hairline, and the “Dang, you’re only in your thirties” comments from my students to prove it. In recent years I have been able to describe it the following way: when I sit down to assess student work, I feel a lot of pressure to “get it right.”

Since most assessment practice has teachers hunting for what students did wrong, I never wanted to get caught making an error about my students… well, errors. And, since everything I did as a teacher had to “hold up in court,” I was developing an unhealthy level of stress as a young teacher. I recently described it this way:

Try to picture me hunched over a stack of student work in a dimly lit room. I’m sweating because behind me is this Dickensian specter in a black hood, arms crossed, holding a gavel in one hand. As I am trying to give my students good feedback about their work, I am constantly glancing over my shoulder, just waiting for the ghost of assessment present to wallop me with that gavel.

And this is how I felt every time I graded student work.

Let’s get back to the time I first assessed student writing according to the complicated, boxy scoring guide. I wrote comments all over their papers in red ink, circled the parts of the rubric that best corresponded to what they produced, and then gave the assignments back to the students. Can you guess what they did? They looked at the scoring mark and tossed the essay aside.

So, all that self-inflicted anguish to get the assessment just right and help my students toward becoming better writers, all while trying to stay out of a courtroom, was tossed out like used Kleenex. At least, that’s how it appeared on the surface. What I didn’t realize at the time was that my students had their own anxieties about how their writing was being assessed. If I had “bad feels” assessing my students’ writing as their teacher, they were having their own negative experience about the evaluation of their work. Hang in there and we’ll get to that in a bit.

Assessing Deficits: Have We Been Trained to Aggressively Look for Errors?

My journey with rubrics and writing assessment (well, assessment in general) started in a bit of a dark place. At least, that’s how I have painted it here. The language of every rubric that has been handed to me has the assessor looking for errors.

About a year ago, I wrote the post “Does Your Rubric Punish Students?”, in which I described the following:

Reading [a typical analytic rubric], from left to right, communicates deficit. If a student understands that the good stuff is on the left-hand side of the rubric, and the teacher marked a lower score, their sense of their own competence will wane as they scan from left to right to discover which box was marked. With all of the positive adjectives on the left and increasingly (or is it decreasingly?) condescending adjectives on the right, the communication is subtle: you must be perfect to maintain a top mark.

But none of us are perfect. We also know that none of our students are perfect. Yet the design of our scoring tool continues to reinforce the myth that students need to be perfect. If they’re not perfect, then they are losers. This does not focus on a student’s growth; it focuses on their inability to accomplish a task at a proficient level.

And here was the picture that accompanied the above paragraphs in that post:

[Image: a rubric whose columns run from good on the left to bad on the right]

And I would like to note that when an ELA teacher is using a rubric to assess student work, any scores or grades that come from that rubric will be taken as an evaluation of the students’ work AND their thinking. Evaluations are judgments; they are terminal, final. And it doesn’t matter if we call the assessment formative or summative, many of our students will experience it as summative, communicating our final judgments about their work and their thinking. These students believe the teacher is extending those evaluations beyond the work to their worth as a person. This means the students believe, “When it comes to this class, the teacher thinks I am smart/mediocre/stupid.”

Traditional rubrics put assessors on an “error hunt,” and not enough teachers talk about this mentality. Those scoring tools set us up to look for what students are doing wrong, making the top criteria the only praise-worthy goal. They are not about improvement. They are only about delivering a final judgment on a piece of writing.

In the next section, I am going to take you through the shifts I have been making over the past decade, starting with a rubric initially designed to assess college entrance exams at a number of California universities. Before I dive in, I want to state a disclaimer upfront. I sincerely believe that the architects of these rubrics had the best intentions in their original design. I also believe the rubric was never intended to be seen by students; it was meant only as an evaluation tool for members of the institution that created it. Additionally, teachers with the best intentions for their students (people like me) have used this rubric for years to help make our students better writers without seeing how it was harming their identities as growing writers.

From Rubric to Feedback Guide

What you will read below is the story of how, over the past seven years, I have slowly developed a different approach to writing assessment. I have moved away from scoring by rubric to giving as much actionable feedback as I can.

Here is the rubric I started with:

[Image: the original analytic rubric]

If your eyes glossed over and you skipped reading it, GOOD! For the sake of time and clarity, let’s just focus on a couple of features:

  • High scores are on the left, low scores are on the right.
  • The high score is labeled “superior” (a word I find distasteful).
  • The low score is labeled “very weak” (which may feel pretty harsh, but this is a revised version of the rubric–it used to be labeled “incompetent,” and if you really have to see it for yourself, click HERE).

Before moving on, let’s take a look only at the “quality and clarity of thought” row. You will notice that there is discussion of assets in the first two columns, but the language gives itself over to error hunting, ending with “unfocused, illogical, or incoherent.” These days, I have difficulty with the thought of giving this evaluation to a young person in the process of learning how to develop their skill set and thinking as a writer.

Stepping back and looking at that analytic rubric as a whole, you can see there are 36 boxes. I’ll admit it: after using this rubric for eight years, I cannot keep all those categories in my head. It’s impossible. Also, only 6 of the 36 boxes hold the scores hardworking students want to receive. That means there are 30 different ways to fail, or at least not measure up.

After growing weary of this assessment tool, I made some adjustments. They were mostly for me (the teacher) because I was overwhelmed with all the different ways to score an essay. I was still thinking inside those boxes, but I wanted to use an assessment tool that was more affirming. When assigning a source-based argumentative essay, I came up with the following:

[Image: the revised rubric for a source-based argumentative essay]

Here are the improvements as I see them (they are minor, and I still have a lot of problems with this rubric):

  • The language about deficits has softened somewhat.
  • Down from 36 boxes to 24.
  • Defined clear, bottom-line parameters.
  • Students have the chance to resubmit (though I am wincing at the thought of punishing revisions through reduction of points).

Though I preferred this version to the previous one, all my criticisms still remained. But one day, as fate would have it (or the algorithms that run social media would have it), I chanced across one of Kelly Gallagher’s tweets about moving away from the use of rubrics:

And with that tip, I had my Spring Break reading lined up (aside: I accidentally bought the other Maja Wilson title, Rethinking Rubrics in Writing Assessment, and I was completely blown away by it, which led to a radical change in how I mark student writing).

As Maja Wilson’s book was helping me overhaul my thinking about marking student writing, I discovered the Single Point Rubric (SPR) through Jennifer Gonzalez (@cultofpedagogy) on the Cult of Pedagogy website (If you aren’t familiar with the SPR, then take a minute to click over to Jenn’s site and read up on it). This post–and all the pictures of teachers using this tool that I could scrape up online–helped me apply the insights gained from Wilson’s book.

Returning to the rubric discussed above, this is how I initially reshaped the rubric:

[Image: my first version of a single point rubric]

This was a welcome breath of fresh air. If students met the success criteria, I would check off the box in the target column. If they did better, I would leave an affirming comment and then point them toward a next step (I know, “above target” is technically missing the target, but we all understood and didn’t make a big deal about it). If they were not hitting the target, I would give them actionable feedback in the left-hand column about what they could do to reach it.

Overall, this made assessment much smoother for everyone, and the rubric communicated improvement, but I had one big problem with it: feedback real estate. There was not enough room for some of the comments I wanted to leave. If I commented on the left, the box on the right sat empty. So, I went back to the drawing board.

[Side note: during this journey toward a better writing assessment tool, after lots of reading and connecting with other thoughtful educators, I decided to remove points from my assessment practice in the classroom (for all assignments, tests, and quizzes). Instead, students put in purposeful effort, receive feedback, take action based on the feedback, reflect, and conference with me at the end of the term in order to earn a letter grade. See more about what I do HERE.]

After coming up with another iteration of this rubric, I snapped a pic and pitched it out on Twitter to get some feedback. I received so much help from people in the #NCTEVillage community, all of which went into shaping the latest version of this rubric. But one perspective-altering voice emerged above the others. Julia Fliss (@JuliaFliss) suggested changing the success criteria into questions, and a whole new world opened up in that moment. When I re-imagined the success criteria as questions, I found that I was no longer looking for errors. Instead, I was looking for where students were demonstrating success!

[Image: JJ Single Point Feedback Guide (asset-based)]

[If you would like a downloadable template of this feedback guide, find it for free HERE | If you find it useful, please subscribe to Make Them Master It]

I took everything I had learned over the years, all of the challenging perspectives, and I came up with what you see above. It has all the features I have come to appreciate:

  • Success criteria as the single point of assessment
  • Plenty of room for comments
  • Criteria framed as questions
  • Check boxes to inform students of their level of achievement
  • No penalties for redoing work

This version completely shifted how I approached each student’s essay. For instance, in the “Understanding and Use of Sources” section, I would record what I found, and then explain what needed to be added to make it an excellent application of the standard. In particular, I found myself leaving this comment several times:

“Good job including citations from your sources and transitioning into most of them. It was clear that your argument was grounded in your research. Instead of explaining how information from each source contributes to your claim one at a time, next time be sure to synthesize information from several sources at once, showing how the texts are in conversation with one another.”

That is so much better than circling a box that reads, “demonstrates a limited understanding of each cited source, or makes poor use of each cited source, in attempting to synthesize multiple sources.” That’s not helpful to the student. It only helps me, the teacher, justify the score I gave; it does nothing to help a student improve his or her writing.

Since I wasn’t tracking errors or points earned anymore, the final step I took in this process was rebranding this assessment tool as a “feedback guide.” Using this feedback guide, my students have started to understand what they are doing well and what they need to work on to improve. It has guided me toward commenting on the good in their work AND promoting their growth, which is precisely what I have been trying to do for the past 15 years of my career.

My Current Perspective on Writing Assessment

Too much of our assessment practice puts teachers in the position of looking for errors. In addition, when we have scoring guides that break down academic behaviors into what students are NOT doing on an assignment, any comments left for students will be focused on teachers justifying the score they chose to give, not on feedback that will help the students improve.

Teachers are acting in good faith when assessing student work (I refuse to believe that teachers set out with the explicit motive to harm students through the use of a rubric). But teachers have been given poor tools, bad counsel, and coercive directives to assess students’ writing according to a deficit model (if you’re interested in reading more on the deficit model, read this article at Edutopia). I fear this is harming our developing writers.

It’s time we started thinking through how we can approach assessment in a way that builds assets in each student. This will cause a shift in our approach because we will need to reimagine how we score student work. Many teachers are rethinking their assessment practice through the use of Standards-Based Grading (see @mrsbyarshistory‘s posts about SBG HERE), which tends to focus on the development of the student.

Here’s a thought: maybe it would help if we started to separate the concept of “assessment” from the concept of scores and points. The two don’t have to go together. With that in mind, there are some great resources at teachersgoinggradeless.com that I encourage everyone to give a chance. Here’s a piece from my good friend Deanna Lough that discusses authentic assessment.

Further Reading

If you’re interested in further reading on the topics of rubrics, assessment, or grading, here are some other posts:

Final Note (Really)

I believe that analytic rubrics have their place, but I am increasingly of the conviction that they should not be given to students. Analytic rubrics are sorting tools. They can be helpful when our intent, as a group of teachers, is to uncover trends in our students’ capacity. This can help a group of educators better identify areas of instructional need. But in my opinion, the only assessment tool my students need to see is the feedback guide that helps them get better.


[Image: presenting at a conference workshop]

Before you go, I wanted to let you know that I offer presentations and workshops for teachers, and I would be thrilled to come present to the good people in your organization. Above is a picture of me presenting at the National School Transformation Conference to a room full of teachers, principals, and district-level leaders. See what I have to offer ==> HERE <==.



QUESTION: Apart from rubrics, what other alternatives are out there for assessing students?

One thought on “Asset-Building Assessment: From Degrading Rubrics to Actionable Feedback Guides”

  1. I loved this article – I’ve done a number of sessions I called “Death to Rubrics” and have followed Ms. Wilson for years.

    A couple of thoughts:

    In the more than 20 Middle Years classes where I serve as a mentor, we focus on “coaching” before assessing. We find the best feedback for our students comes not via the final piece, but “In the Middle” (I asked Nancie Atwell once if that was perhaps the hidden meaning in her books – she laughed) of the piece. I’m convinced that is where the best teaching occurs – like the coach who can make adjustments in the game, before looking at the videotape afterwards. There is still learning that way – but not as meaningful.

    I’m also proud of what we’ve done with assessment, which is collaborative, using the same principles as in defending a Masters Thesis. I’ve adapted Linda Rief’s ideas of Process Paper and we conduct Oral Assessments to go with their portfolios of work. Students come prepared to “defend” their work by looking over the Key Lessons we have studied all year.

    But perhaps the key ingredient that is different is how our students don’t always write using common forms (in fact, rarely). Seems like all the assessment tools here are based on the idea that it’s a common piece – and the essays are very limited (with the focus on Works Cited). In that paradigm the obvious inclination is to “rank and sort”, which according to Maja is the original reason why rubrics were created.

    Fortunately in Manitoba our ELA curriculum is not so restrictive. As a result, with different forms but common writing qualities (for lack of a better term), we can have students “defend” their papers by referring to their use of the Key Writing Lessons we shared with them. Our assessments – some of which we have videotaped – are celebrations. Rubrics are replaced by the lessons we decide are necessary to the form used: e.g., What Makes A Great Essay? would include things like Hooks, Thesis (stated or implied and why), Powerful endings (and possible use of Echo), etc. The assessment interview then focuses on how well the student can explain their use of the KEY LESSONS. We often find that they tell us things we never would have seen on our own. I’m excited that this year I am working with a High School teacher who will be doing these interviews.

    We also use this assessment tool in Social Studies and Science for Projects, where SYNTHESIS is the key component, but is not always obvious to the teacher looking at the finished product.

