Authentic Assessment Toolbox
created by Jon Mueller

What is Authentic Assessment? Why Do It? How Do You Do It?

 


Workshop: Creating a Good Rubric


In the "workshops" sprinkled throughout this website I will attempt to capture (and model) the process I follow when assisting someone or some group in developing standards or authentic tasks or rubrics. For this workshop, I will begin with a particular authentic task an imaginary educator has developed for his seventh grade students, and we will work towards an appropriate rubric for assessing students' performance on the task. You can "play along at home" by imagining how you would respond to the educator or to me.


Somewhere in the back of my mind .... (hey, it's my workshop; I'll host it where I like!)


Educator: I developed a task I give my students to help determine if they have met the standard "Describe the rights articulated in the Bill of Rights." Specifically, I ask students to choose three of the rights. Then, I give them the choice of describing each right in one of three ways: 1) write an ad trying to persuade people to adopt this right, clearly identifying what the right covers; 2) describe to your cousin, who will be spending the summer with you, one of the rights he/she will have in your house; 3) explain to your classmate, who is having difficulty understanding this particular right, what is meant by this right as clearly as you can, using examples if possible. Students can use one method for describing all three rights, or one method for each right, or any combination of the methods with the rights.


Me: That sounds like a very engaging, meaningful way to address the standard.


Educator: Well, I did take your ridiculously good "Creating an authentic task" workshop.


Me: Apparently. However, you recognize that your task does not fully address the standard, right?


Educator: Yes, but you said that typically a single task cannot address an entire standard. So, I use this task in conjunction with other assessments. Also, I make sure that my students get to spend time with the other students' illustrations of the rights so they get more exposure to all the rights.


Me: Good. So, what can I help you with?


Educator: I have given this assignment a couple of times now, and I like the work the students have created so far, but I am really struggling with grading it. I'm not quite sure what I should be grading them on. And, I like giving my students the choice of the three different methods for describing the rights, but then I have three different assignments to grade. I probably should be grading them all the same, but I'm asking them to do different things in the three assignments. So, can I really grade them all the same? And some students are just more creative than other students. So, how do I grade for that?


Me: I've got just one word for you: Plastics.


Educator: What?


Me: Oh, sorry, I've got a lot of stuff rattling around here in the back of my mind. What I meant to say: Rubric.


Educator: A rubric? How would that help? And wouldn't I need three of them?


Me: Let's start with your first question. I imagine you are at least somewhat familiar with a rubric. You probably know that a rubric is a scoring scale used to judge performance along a set of criteria, or characteristics of good performance on a task. All rubrics are composed of two components: 1) a set of criteria, and 2) levels of performance along which you judge performance against those criteria.
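
(An aside for readers who like to see structure written out: those two components can be pictured as a simple data structure, as in the small Python sketch below. This is only an illustration, not part of the workshop, and the criterion and level names are placeholders.)

```python
# Illustrative sketch only: a rubric as two components,
# (1) a set of criteria and (2) levels of performance for judging each criterion.
# The criterion and level names below are placeholders, not the workshop's task.

rubric = {
    "criteria": {
        "Accuracy of content": ["Poor", "Good", "Excellent"],
        "Clarity of writing": ["Poor", "Good", "Excellent"],
    }
}

for criterion, levels in rubric["criteria"].items():
    print(f"{criterion}: judged along levels {levels}")
```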


Educator: Yes, I have created a couple of them.


Me: Good. Now on to your second question: How could a rubric help with your problem? A well-designed rubric can provide several benefits. First, if you have clearly identified the most essential criteria for a task, then you can be more confident that you are actually evaluating your students' performance on the dimensions that are critical to the task. Moreover, if the task(s) was carefully aligned with your standard(s), as it was in your case, then you can be more confident that you are evaluating your students' performance on the elements that are most critical to their learning, as identified in your standards.


Second, with a well-designed rubric you can more consistently apply the criteria to student work, reducing the likelihood that you are evaluating one student on one set of criteria and judging another student on different factors.


Educator: Yes, I often feel like I am emphasizing or responding to one thing in one student's work, but then I focus on something else for the next student. It's even worse when I give them choices of assignments. Won't three rubrics have the same problem?


Me: We'll come back to your question about needing three rubrics later. First, let's focus on how a rubric can help you. I mentioned a couple good reasons already; you can read more at this site. But let's look at the specific task you created.


If you want to know how to evaluate students on the task, you need to know what you are looking for. To answer that question it is always good to remind yourself of the standard or standards the task is addressing.


Educator: "Describe the rights articulated in the Bill of Rights."


Me: Right. So, what are you primarily looking for students to do?


Educator: I want them to tell me what the rights are in their own words.


Me: Why in their own words?


Educator: I don't just want my students to memorize the rights; I want them to understand them. I assume if my students can rephrase them in their own words then they likely have a fairly good understanding of the rights.


Me: So, do you want them to explain the rights? Apply the rights? Name the rights?


Educator: Well, it would be nice if they could do all that, but I just want them to be able to tell me what the rights are, in their own words.


Me: So, as you think about a rubric, it is important to pay attention to the verb you have chosen for the standard. "Describe" means exactly what you said you wanted. It doesn't mean explain, or apply, or name the rights. You are asking your students to describe the rights in your assignment, and that is exactly what you want to judge them on.


Educator: Okay, then what?


Me: After checking back to your standard, you are ready to look at the authentic task you assigned. You decided to offer your students some choice in the assignment. Why is that?


Educator: I don't want my students to feel like everything is done to them. I want them to feel a part of their education, to make some choices and feel at least some control.


Me: Well put. Self-determination theory would certainly second those thoughts, if … a theory could second things…


Anyway, that brings us to your third question: Do you need three rubrics for your three assignments? Although you have given your students some choice in the assignment, you have identified a single primary goal for each of the choices. As you just said, in each assignment you are looking for students to describe a right in their own words. Since you have the same goal for each assignment, you can apply a single rubric to all three.


Educator: So how do we get started on a single rubric?


Me: You've done this before. From what we have said so far, where do you think we start in building a rubric?


Educator: I would start by figuring out what I want to judge the students on for each assignment. I guess I want to figure out what characteristics of good performance those tasks have in common.


Me: Good. Again, you can never ask yourself enough times what you are primarily looking for.


Educator: I want them to describe a right in their own words.


Me: So, think about the tasks. What does a good description of the rights look like in each one? Think about what you most want to see in your students' work for each task. Go over in that corner and generate a list of criteria for the tasks, and come back to me when you think you have a possible list. And don't mess with my visual cortex over there; I kind of need that.


-----------------------------------------------------------------------------


Educator: I'm back. My lists of criteria for the three tasks are pretty short, so I don't know if I'm missing something.


Me: Let's see what you came up with.

Educator: For the persuasive ad task I would look for these characteristics:

· Was the ad persuasive?
· Did it identify what the right covers?

For the "cousin is visiting" task I would look for this criterion:

· Described the right to the cousin

For the classmate task I would look for these criteria:

· Explained the right to the classmate
· Used examples


Me: A good start. First, I notice you recognized that "criterion" is singular and "criteria" is the plural form. Now, as for your lists, you obviously used the task descriptions as a starting point in identifying your possible criteria. That is always a good place to start because you want to be evaluating your students on what you asked them to do. As long as your task is aligned with your standards, then your criteria should be aligned with your task.


We were going to try to create one common rubric for all three tasks. Is there anything in your sets of criteria that is common?


Educator: Well, I say I want students to identify what the right covers, describe the right to the cousin, and explain the right to the classmate. Those are getting at much the same thing, but I did use different verbs.

Me: You used different verbs in the task descriptions, which is okay as long as you make it clear that the purpose of the assignment is for students to demonstrate that they can describe the rights in their own words. The different tasks just provide different vehicles for doing so. But, in order to persuade or explain, students will need to describe the right clearly. So, you can return to that verb for your common criterion.

Educator: So, my criterion could be

· Describe the right in your own words in an ad, or to your cousin, or to your classmate

Me: You're getting there. However, just like in writing standards or outcomes, we don't want to be too context specific. To see what I mean here, ask yourself: Are you interested in how well your students can describe the right primarily when they are talking to their cousin or classmate?


Educator: Well, no, I would want them to be able to describe the rights to anyone.


Me: So, how could you change your criterion?


Educator: I guess I could leave off the last part and say describe the right in your own words. Or, should I say accurately describe the right in your own words?


Me: That's what I was going to ask you to think about next. That statement, "Describe the right in your own words," covers virtually the entire goal you are after in this task. So, you want to ask yourself whether that criterion is too broad. In other words, should it be broken into multiple criteria? To answer that, ask yourself what a good description would look like.


Educator: Well, a good description of a right would be accurate and it would be complete.


Me: Could a student's description be accurate but not complete? Could it be complete but not accurate?


Educator: Sure.


Me: So, those are two distinguishable characteristics, and they sound like important ones. By being sufficiently distinguishable you know the criteria do not overlap too much. If they did then you would be evaluating the same criterion twice. Most criteria in a rubric will be related to each other in some way because they are all connected to the same task. That's okay. You just want to make sure they are sufficiently distinct from each other so your evaluation of each one provides you with some unique information.


Educator: So, one criterion could be accurate description of right, and another criterion could be complete description of right.


Me: Very nice. Now, is there anything else you would like to see in a student response to the first task?


Educator: I would like everything my students write to be clearly stated and free of mechanical errors. I usually look for those in my students' written assignments. But that's not really the focus of this assignment, so should I include it in the rubric?


Me: You just said that you expect your students' work to be well written. So, by including those criteria in the rubric you are reinforcing that message.


Educator: But by having two criteria about the right and two about writing, aren't I saying they are equal to each other on this assignment?


Me: That is where weighting of the criteria in the rubric comes in.  We will get to that when we move on to the levels of performance.  So, you now have stated four criteria for the first task:

· Accurate description of the right
· Complete description of the right
· Clearly stated
· Free of mechanical errors

Do all of those apply to the second and/or third tasks?


Educator: In both tasks I am asking students to describe one of the rights, just to different audiences, so the same criteria should apply.


Me: It looks that way, but your description of the second task says, "describe to your cousin, who will be spending the summer with you, one of the rights he/she will have in your house." Couldn't the student pick a right that is not in the Bill of Rights? Is that okay?


Educator: No, I want them to choose from the Bill of Rights. I'll go back and fix that task description to make that clear.


Me: Good. We have your criteria for the rubric. Now we just need the levels of performance. You may be familiar with analytic and holistic rubrics. If not, you can read about them in the Rubrics chapter. This is a simple enough task where you could use a holistic rubric, but I would recommend an analytic one for the reasons given in the discussion at the above link.


Educator: An analytic rubric evaluates each criterion separately, right?


Me: That's right. So, we take each criterion one at a time and decide how many levels of performance are appropriate and what they should look like. The levels of performance are used to judge how well a student has met the criterion.


Educator: Let me guess: This is where you send me off to some fold in the back of your cortex to draft some levels of performance to go with my criteria?


Me: That's why it's called a workshop. Remember, don't touch the …

-----------------------------------------------------------------------------------------------

Educator: I'm back. Here's what I've got:

RUBRIC: Bill of Rights Assignment

Criteria | Poor | Average | Good | Excellent
Accurate description of the right | Not at all accurate | Mostly inaccurate | Mostly accurate | Accurate description
Complete description of the right | Not at all complete | Mostly incomplete | Mostly complete | Complete description
Clearly stated | Not at all clear | Not very clear | Mostly clear | Assignment is clearly stated
Free of mechanical errors | A lot of errors | Some errors | Only 1 or 2 errors | Free of mechanical errors


Me: This looks very good. You have identified the four criteria. You have described clearly distinct levels of performance for each of the criteria, as is done in an analytic rubric. Your descriptors, the statements inside each box which describe performance for a criterion at a particular level, are easy to understand.


Educator: So, I'm done?


Me: (Evil laugh echoing inside my head) Not quite so quick. Let me just ask you a few questions. First, as I will likely mention again, it is useful to imagine a variety of possible student performances on the task to see if the rubric makes sense as it is designed. For example, some of the rights, like the first one, have multiple parts to them, right?


Educator: Yes.


Me: So, according to your rubric, where would you place a student who accurately described only one of the rights mentioned in the First Amendment? Is it plausible that a student might say the First Amendment is only about the freedom of speech?


Educator: Oh yes, I've gotten that before. Well, on my rubric that student would be rated as mostly incomplete and… I guess an accurate description.


Me: Should the student get an "excellent" for accurate if only one part is included?


Educator: Probably not. I can't say he is accurately describing the whole right. But I can't give him an "average" or "poor" in my rubric because it is not inaccurate. It doesn't seem like my rubric covers that problem. But it's not like I am missing any criteria. So, what do I do?


Me: We said that "accurate" and "complete" could be distinguished from one another in this task, and they can be considered separately. However, these are two criteria that cannot easily be scored separately. How accurately a student describes the right depends upon how complete the answer is as well. So, you likely need to judge the two criteria together. We probably would not have noticed this if we hadn't first considered possible student performances on the rubric. That is harder to do with a completely new task, but it is still a worthwhile step in the process of creating a good rubric.


Educator: If we are going to combine the two criteria, could we change the rubric like this:

Criteria | Poor | Average | Good | Excellent
Accurate and complete description | Not at all accurate or complete | Mostly inaccurate and incomplete description | Mostly accurate and complete description | Accurate and complete description


Me: Those are good descriptors. If you are going to apply that criterion to any of the rights your students select, it would be difficult to be more precise than "mostly." Now, back to considering possible student performances, where along your levels of performance would you place a student who accurately described freedom of speech but mentioned nothing else?


Educator: I would say that student's answer was mostly an inaccurate and incomplete description.


Me: And what about a student who included all the parts of the right but only described half of them accurately?


Educator: That's a little trickier, but I think I would put that response in the "mostly accurate and complete description" category.


Me: And in which category would you place an assignment in which the student has primarily restated the right in the same words used in the original document?


Educator: My students know that when I say to describe something I mean for them to put it in their own words.


Me: It's good that you have consistently emphasized that and made it clear. However, you have to ask yourself, will you still get some assignments that are partially or completely copying the words from the right itself?


Educator: Yes, that is going to happen. So, do I need to include "in their own words" along with accurate and complete?


Me: That gets rather complicated for a single criterion. I have a different suggestion we will get to in a minute. For now, let me say that no set of descriptors will perfectly cover all the possible student performances. You just want to get to the point of feeling comfortable placing the most likely responses in one of the levels. You seem to be there with the "accurate and complete" criterion, so let's look at the other criteria.


As for the "clearly stated" criterion, you won't be able to describe degrees of clarity with much more clarity, unless you added more levels of performance. Particularly for such a brief assignment, it does not make sense to make very fine distinctions on clarity if that were even possible. The rubric is not just for judging purposes, though. So, you also want to ask yourself if telling a student that her writing is "mostly clear" on this assignment will give her a good sense of where her writing stands compared to a rating of "not very clear" or "clearly stated."


Educator: I think it does. Also, I may include a little more specific feedback in written comments or conversation.


Me: Good. And I think here is a good place to consider the concern about students stating the right in their own words. When you ask them to write clearly, I suspect you are also implying that it should be in their own words. So, is it possible to combine clarity and putting the right in their own words in one criterion?


Educator: Yes. I could state the descriptors like this:

Criteria | Poor | Average | Good | Excellent
Clearly stated in own words | Not at all clear AND not in own words | Not very clear OR not in own words | Mostly clear and in own words | Clearly stated and in own words


I described the "good" level as "mostly clear and in own words" because it should be easy for students to avoid copying the words of the right. It is not easy for many of them to put it in their own words, but they do know not to copy it. So, even if the assignment is mostly clear, if it is not in the student's own words I still would not describe it as good. That would be just "average." Does that make sense?


Me: Definitely, you are asking yourself the right questions. So, let's move on to the last criterion. Tell me why you chose the descriptors listed for the free of mechanical errors criterion.


Educator: Since this is a brief assignment, and because I have given a lot of attention to proper mechanics, students should be able to produce an assignment free of mechanical errors. So, I think that is realistic for the top level of performance. Similarly, I would expect good mechanics to be limited to just one or two errors. Then, more errors would be worse, so I chose "some" and "a lot" for the last two categories.


Me: As you said, this is a brief assignment, so it is possible to quantify expectations more easily than for a longer written product. It sounds as if it is reasonable to describe excellent work as free of errors, and good work as containing only 1 or 2 errors. Why wouldn't it then be possible to quantify the lower levels?


Educator: Well, there is the potential for a lot of errors, and I wouldn't want to have a different level for every few errors up to 20 or more, would I? That would look strange on a rubric.


Me: Just as you have lumped 1 or 2 errors together into one level, it is reasonable to do that with other numbers of errors. I would suggest starting at the lower end: What would you consider an unacceptable number of errors for most of your students?


Educator: For this assignment … probably more than five.


Me: So, that can be your lower level. Do you have a problem lumping an assignment with six errors with one that has twenty?


Educator: No, not for this assignment. So, that would leave me with 3-5 errors for the "average" level. Those four categories seem reasonable for what I am expecting on this assignment.

Criteria | Poor | Average | Good | Excellent
Free of mechanical errors | More than 5 errors | 3-5 errors | Only 1 or 2 errors | Free of mechanical errors

So, now I'm done?


Me: Just a couple more questions. I notice you have four levels of performance for each of your criteria. Why is that?


Educator: Well, I wanted to make sure I was spreading the students out, and I figured I should put descriptors in all the levels. Shouldn't I have the same number for each criterion?


Me: In an analytic rubric, performance on each criterion is evaluated separately. Thus, what a particular criterion's levels of performance look like and how many there are should not be driven by the method of judging the other criteria. How many levels of performance are linked to a criterion is dependent upon what makes sense for that criterion on that task.


Let's start with your last criterion - free of mechanical errors.


Educator: I like the four levels we came up with before.


Me: Okay. That does not mean we need four levels for the other criteria. Let's look at them. First, consider the nature of the task. You described it as a brief assignment. Is it a very difficult assignment?


Educator: No, not particularly.


Me: So, would you say your students would be doing a poor job if their description of a right was mostly inaccurate and incomplete?


Educator: Yes, I guess so. Then what would I put in the "average" level?


Me: Why have another level? Keep things simple. It is more difficult to make finer distinctions. For a brief assignment like this one, you will have an easier time placing students in three levels for their descriptions than in four levels. Ask yourself: Does "mostly inaccurate and incomplete description," "mostly accurate and complete description," and "accurate and complete description" reasonably cover the types of performances you have seen on this assignment (or would expect to see on a new assignment)?


Educator: I see what you mean. Yes, those levels would work. I guess the same would be true for the "clearly stated" criterion. "Not very clear OR not in own words" would not be acceptable, so that can be my lowest level. Are you sure I don't have to have the same number of levels? It fills out the little table so nicely.


Me: Believe me, there is no rule written somewhere that says a rubric must contain an equal number of levels of performance, and there is no research that suggests an advantage to such a rubric. You need to be flexible in creating a good rubric. Common sense should dictate how you set up your rubric. Ask: Is this reasonable given my goals?
Let's see what you have created so far:

Criteria | Poor | Average | Good | Excellent
Accurate and complete description | Mostly inaccurate and incomplete description | | Mostly accurate and complete description | Accurate and complete description
Clearly stated in own words | Not very clear OR not in own words | | Mostly clear and in own words | Clearly stated and in own words
Free of mechanical errors | More than 5 errors | 3-5 errors | Only 1 or 2 errors | Free of mechanical errors
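
(Another aside for code-minded readers: the draft above can be written out as a plain structure in which each criterion simply carries its own list of descriptors, which is why criteria do not need the same number of levels. This is just a minimal Python sketch of the rubric as data, not part of the workshop itself.)

```python
# The draft rubric above, written as a plain data structure.
# Each criterion carries its own list of level descriptors,
# so criteria do not need the same number of levels.

draft_rubric = {
    "Accurate and complete description": [
        "Mostly inaccurate and incomplete description",
        "Mostly accurate and complete description",
        "Accurate and complete description",
    ],
    "Clearly stated in own words": [
        "Not very clear OR not in own words",
        "Mostly clear and in own words",
        "Clearly stated and in own words",
    ],
    "Free of mechanical errors": [
        "More than 5 errors",
        "3-5 errors",
        "Only 1 or 2 errors",
        "Free of mechanical errors",
    ],
}

for criterion, levels in draft_rubric.items():
    print(f"{criterion}: {len(levels)} levels")
```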


Educator: Much better than where I started. But we aren't done yet, are we?


Me: Almost, I promise. Why did you select poor, average, good, and excellent as labels for the levels of performance in your rubric?


Educator: Those are labels I often use in describing student work or what I am looking for in their work, and I wanted to use the same language.


Me: Good, consistency in your language is critical. What does "average" mean in your rubric?


Educator: It means the student is somewhere in the middle, adequate, but not that good.


Me: What if every student in your class did a "good" job on an assignment? What would the average be?


Educator: I guess it would be "good."


Me: So, would you score them as "average" or as "good" in your rubric? The problem is that you are mixing two kinds of scales in your labels. "Average" and "above average" are normative terms that describe performance relative to other performance. On the other hand, "poor," "good," and "excellent" are labels that describe how well someone has met a set of criteria. Given that authentic assessments are typically criterion-referenced, it makes sense to apply criterion-based labels if you choose to include labels in your rubric.


Educator: I don't need labels for the levels of performance?


Me: Flexibility, flexibility, flexibility. Very few things are required in rubrics. Include what makes sense. Only assign labels if they add significant, useful information.


Educator: Since my students often hear me use terms like excellent and good to describe the work I am looking for, I would like to apply those to this rubric. But what do I put in the place of "average"? I still have four levels for the mechanical errors criterion.


Me: In describing what you meant by "average" before you used the term "adequate." Does that describe 3-5 errors on this task? Is it sufficiently distinguishable from "poor" and "good"?


Educator: That works. Getting close. What else do I need to consider after I have my criteria and levels of performance?


Me: You earlier asked, "But by having two criteria about the right and two about writing, aren't I saying they are equal to each other on this assignment?" Have you given any thought to the scoring of this assignment? How many points was it worth when you last assigned it?


Educator: Each of the three assignments is worth 10 points. So, what if I say the rights criterion is worth four points and the other two are worth three points each, since they aren't as important? For example,

Criteria | Poor | Adequate | Good | Excellent
Accurate and complete description | Mostly inaccurate and incomplete description (1) | | Mostly accurate and complete description (3) | Accurate and complete description (4)
Clearly stated in own words | Not very clear OR not in own words (1) | | Mostly clear and in own words (2) | Clearly stated and in own words (3)
Free of mechanical errors | More than 5 errors (0) | 3-5 errors (1) | Only 1 or 2 errors (2) | Free of mechanical errors (3)

 

Me: We have been at this a while, so let me address a few points about your scoring at once. Points do not need to be assigned at equal intervals across the levels. For example, for your first criterion, you don't have to assign the points as 4, 3, 2. They can be 4, 3, 1 as you listed, or 4, 2, 1, or 4, 3, 0. Be flexible.

Also, you chose to assign one point for poor for one criterion, and zero points for poor for another criterion. That is perfectly fine. Don't be afraid to assign zero points if you do not believe the work deserves any credit.

Another way to address the distribution of points is to include a range of possible points within a level. For example, for your first criterion, you could assign 4 points for excellent, 3 points for good, and 0-2 points for poor, assigning 1 or 2 points for a little more accurate or complete description.
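
(A quick sketch of that flexibility for readers who like to see it spelled out: each level can carry either a single point value or a range, as in the hypothetical Python below. The point values come from the educator's table and the range option just mentioned; the `allowed` helper is my own illustration, not part of the workshop.)

```python
# Illustrative sketch of flexible point assignment:
# a level can be worth a single point value or a range of points.
# Criterion names match the draft rubric; the helper function is hypothetical.

points = {
    "Accurate and complete description": {
        "Poor": (0, 2),      # a range: award 0, 1, or 2 within the level
        "Good": 3,
        "Excellent": 4,
    },
    "Clearly stated in own words": {"Poor": 1, "Good": 2, "Excellent": 3},
    "Free of mechanical errors": {"Poor": 0, "Adequate": 1, "Good": 2, "Excellent": 3},
}

def allowed(criterion, level, awarded):
    """Check that the points awarded fall within what the rubric allows."""
    value = points[criterion][level]
    low, high = value if isinstance(value, tuple) else (value, value)
    return low <= awarded <= high

print(allowed("Accurate and complete description", "Poor", 2))   # True
print(allowed("Accurate and complete description", "Poor", 3))   # False
```
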
Educator: I can do that?


Me: There are very few things you cannot do in a rubric. Just ask…


Educator: …Does it make sense? Got it. What do you think of me assigning four points to the description and three points for the other criteria?


Me: In other words, you believe one criterion should be weighted more than the others because it is more important to the assignment. But how do you know if the weighting is appropriate?


Educator: Wait, I think you addressed this one before…We should…consider possible student performances. Yes! Right?


Me: Good, you can repeat back what I said. Now, put it in your own words. How do we do that?


Educator: Well, for example, what if Student A gave an accurate and complete description of the right but it wasn't very clear and there were 3-5 mechanical errors? That student would receive a grade of 6 out of 10. That seems fairly reasonable. Student A got most of the points because she did demonstrate understanding of the right.


Me: On the other hand, what if Student B gave a completely inaccurate description of the right, but his assignment was clearly written and free of mechanical errors?


Educator: Then Student B would receive a … 7 out of 10. And he completely missed the assignment. That's not good.
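
(For readers following along with the arithmetic, here is the same calculation spelled out under the point values in the educator's table above; only the comments and variable names are mine.)

```python
# The two hypothetical performances above, scored with the educator's
# first point assignment (description 4/3/1, clarity 3/2/1, mechanics 3/2/1/0).

# Student A: accurate and complete description (4), not very clear (1), 3-5 errors (1)
student_a = 4 + 1 + 1
# Student B: completely inaccurate description (1), clearly stated (3), no errors (3)
student_b = 1 + 3 + 3

print(student_a, "out of 10")  # 6 out of 10
print(student_b, "out of 10")  # 7 out of 10
```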


Me: What does that suggest?


Educator: I need to weight the description criterion even more heavily. If a student does a poor job on the main component of the assignment, he shouldn't receive a good grade for it. So, let me play around with the numbers for a minute….

RUBRIC: Bill of Rights Assignment

Criteria | Poor | Adequate | Good | Excellent
Accurate and complete description | Mostly inaccurate and incomplete description (0-3) | | Mostly accurate and complete description (4-5) | Accurate and complete description (6)
Clearly stated in own words | Not very clear OR not in own words (0) | | Mostly clear and in own words (1) | Clearly stated and in own words (2)
Free of mechanical errors | More than 5 errors (0) | 3-5 errors (1) | Only 1 or 2 errors (1.5) | Free of mechanical errors (2)

 

I was going to ask "Can I do that?" again, but I think I have learned your answer. It makes sense to me.
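
(One last aside for readers checking the arithmetic: re-scoring the same two hypothetical students under these revised weights shows why the heavier weighting helps. The sketch assumes the teacher awards 0 within the 0-3 "poor" range for a completely inaccurate description; that choice is my assumption, not something stated in the workshop.)

```python
# Same two hypothetical students, re-scored with the revised weights above
# (description 6 / 4-5 / 0-3, clarity 2/1/0, mechanics 2/1.5/1/0).
# For Student B's completely inaccurate description we assume 0 points
# within the 0-3 "poor" range; that is an assumption for this sketch.

student_a = 6 + 0 + 1   # accurate and complete, not very clear, 3-5 errors
student_b = 0 + 2 + 2   # inaccurate description, clearly stated, no errors

print(student_a, "out of 10")  # 7 out of 10
print(student_b, "out of 10")  # 4 out of 10: the main criterion now dominates the grade
```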


Me: Good. I like it. Only one more question. You had asked if we could create one rubric to apply to all three assignments. Does this rubric work for you for all three versions of the authentic assignment you created?


Educator: Persuasive ad version…yes. Describe a right in your house to your cousin version…yes. Explain to your classmate version…well, I said they could possibly use examples. But what I really want to see is a clear and accurate description, whether students use examples or not. So…yes. Is the rubric ready for action?


Me: You will never create a perfect rubric. So, when you are comfortable with it then it is ready to go. Applying the rubric will give you more feedback on it. You can then tweak it where necessary, such as fleshing out the descriptors a bit more. Perhaps, in the future, you will be able to articulate more specifically what you mean by "mostly clear." But for now… ready for action.


Educator: Wow. That was a lot of work. Was it really worth it for one rubric?


Me: Looks can be deceiving. First, yes, that was a lot of work. It is not easy to create a good rubric, even for a brief task like this one.

However, second, consider the alternatives. How much work would it take to create and then apply a quickly and poorly constructed rubric? It would take you longer to assign grades to each student product because you would not have a clear, well thought out rubric to guide you. Furthermore, you would have to consider the time you would spend revising your rubric, perhaps multiple times, because you did not do it right the first time. Or, imagine you did not use a rubric at all for this assignment. You would be more likely to apply your criteria inconsistently, probably apply different criteria to different assignments, and perhaps apply different weights to different student work for the same assignment. You would also likely spend more time explaining to your students exactly what you were looking for in the assignment because you had not shared your well thought out rubric ahead of time.


Third, you actually did not create just one rubric. Do you evaluate students on their writing for other assignments?


Educator: Yes, I do for virtually all of the written ones.


Me: Then in this rubric you have already identified criteria, levels of performance, and descriptors that you can use in rubrics for those other written assignments. Much of the language you developed in this rubric can, and for consistency's sake should, be applied to your other rubrics where relevant. In other words, you just accomplished a lot here.


Educator: I feel better. Thanks. Now, how do I get out of here?


Me: Follow the neural pathway on your left or right until you arrive at the ear canal. That should take you to the exit. In case of turbulence caused by an unfortunate blow to my skull, stars circling above my head should light your way out. And thank you for assessing with us…

------------------------------------------------------------------------------

Other workshops:

Workshop: Writing a good standard

Workshop: Creating an authentic task

 

 


 

Copyright 2010, Jon Mueller. Professor of Psychology, North Central College, Naperville, IL. Comments, questions or suggestions about this website should be sent to the author, Jon Mueller, at jfmueller@noctrl.edu.