There are many helpful sources on rubrics. For example, Waterloo's source is here. We do not plan on repeating what can be found in these resources. Note that we have a separate note on how to use rubrics in open-ended projects where a peer comparison is being done. This note is about the more traditional situation where there is no explicit peer comparison (peer comparison not being the same as peer evaluation).
Using a definition from Carnegie Mellon's teaching center, a rubric is a scoring tool that explicitly describes the instructor’s performance expectations for an assignment or piece of work. A rubric identifies: i) criteria: the aspects of performance (e.g., argument, evidence, clarity) that will be assessed, ii) descriptors: the characteristics associated with each dimension (e.g., argument is demonstrable and original, evidence is diverse and compelling), and iii) performance levels: a rating scale that identifies students’ level of mastery within each criterion.
It is often recommended to provide a copy of the rubric when the assessment is assigned. We agree and disagree. We agree when the intent is for the students to think and behave the way you want them to: you have set the expectations, and it is their job to fit within them. This is where you have removed much of the decision making and creative problem-solving from the assessment for a sound pedagogical reason, not just to make marking easier. And this is fine in some cases. The students will study the rubric, decode it, and try to follow the criteria as closely as possible. Note that the expectations need to be voiced and communicated in such a way that the students understand what is expected, and that the expectations are within reach of the average student, such that meeting them yields an average assessment.
If you are giving the rubric out in advance, you still need to consider outliers and outperformers when setting expectations and describing the rubric, if you really want to know what the students are capable of and what they know. If the rubric does not accommodate the outliers and is based on conformance, average expectations for the average student, or strict laws of conformity, it is flawed. We prefer rubrics that accommodate outliers by having criteria that allow a certain degree of freedom, letting the students pick the cognitive difficulty. This is similar to a sporting event with degrees of difficulty: an easy routine is expected to be completed perfectly and flawlessly, while a more difficult routine has some latitude for certain types of risks and minor failures.
If the course's learning objectives are to work on and develop the cognitive skills related to Bloom's taxonomy, giving a rubric out when the assessment is assigned is counter-productive. You want to see and assess the comprehension and the ability to apply, analyse, synthesise, and evaluate. As with the note on peer-comparison rubrics, it is important to make the rubric afresh each year (or at least tweak it) based on what the students have learned and have demonstrated. You want to point out what could have been done and what should not have been done. This is done in a framework of submit, assess, debrief/discuss, and then a follow-up assessment to see what was learned from the feedback and debriefing.
It is important to discuss the process, interpretation, and strategy for doing the assessment, and these points can also be discussed in the rubric. If peer comparisons are not used, you are using a gold (or bronze) standard for your expectations. These can change assessment by assessment, year by year. It is likely that the old rubrics will leak to future years, but it should be noted at the start of term that the old rubrics are revised each year; caveat emptor. By writing the rubric after the fact, after reviewing submissions, it is often easier to deal with the outliers, the outperformers, and the nonconformity. Variety is much easier to deal with in hindsight.
Another caveat: this type of marking does take time and extra effort in one regard, since the set-up is usually more intense. However, we have found that after the set-up, the marking per item is very efficient and effective, as it reflects more of what was done and not what you expected to be done. In open-ended projects and assessments, it is not what you expect them to do per se; it is how they use what has been taught, applying reasonable judgement to hit the stated 'goal'. You cannot assess this type of learning with simple rubrics.