This Complete Evidence-based Practice paper will describe the design and implementation of rubrics in a large-enrollment introduction to engineering course.
Timely and meaningful feedback is important to student learning but challenging to deliver in large-enrollment classes. At this scale, rubrics are essential for communicating expectations clearly and evaluating student work consistently. We have implemented a streamlined rubric algorithm to reduce the time required for both rubric design and rubric use.
Rubrics are used both to clarify expectations for student work in advance and to evaluate submitted student work. The two main elements of a rubric are the criteria and the standards. The criteria (usually the “rows”) of a rubric are the characteristics of work that are evaluated, while the standards (usually the “columns”) establish levels of quality. The mechanics of rubric construction are explored in detail by Stevens and Levi. Most of their example rubrics have four to six criteria assessed against three standard levels. They suggest constructing these rubrics by starting with the “outside” columns and working inward: for each criterion, first establish the highest standard level, then the lowest standard level, and then fill in the middle level(s). This style of rubric becomes more cumbersome to construct as the number of standards grows. Some authors suggest designing rubrics with an even number of standards to eliminate a “middle” option during evaluation.
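To make the structure concrete, the criteria-by-standards layout and the outside-in construction order can be sketched as a simple mapping. The criterion, standard names, and descriptions below are invented for illustration; they are not taken from the course rubrics.

```python
# Hypothetical sketch of one rubric criterion built "outside in" per
# Stevens and Levi: highest standard first, lowest second, middle last.
# All names and descriptions here are illustrative, not from the paper.

criterion = "Clarity of technical writing"            # invented criterion
standards = ["Exemplary", "Developing", "Beginning"]  # three standard levels

row = {}
row["Exemplary"] = "Ideas flow logically; no grammatical errors."  # 1st: highest
row["Beginning"] = "Frequent errors obscure the main ideas."       # 2nd: lowest
row["Developing"] = "Generally clear, with occasional lapses."     # 3rd: middle

# A full rubric is then a mapping of criteria (rows) to standard
# descriptions (columns); more criteria would add more rows.
rubric = {criterion: row}
```

In this representation, adding standards means writing more middle descriptions per row, which is why construction grows more cumbersome with more columns.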
We have developed the rubrics for our Engineering 101 course by writing only the two outermost columns of each rubric: the highest quality level (which earns full credit, an A grade) and the minimum acceptable quality level (which earns credit roughly equivalent to a C or C- grade). The remaining columns are left blank, but a deliberate algorithm expands the rubric from two columns to six. Two implicit columns lie between the A and the C-, representing work closer to the A description or closer to the C- description; two more lie below the C-, representing an attempt that falls below the minimum standard and no attempt at all. Rubric use follows the same general algorithm: student work is first compared against the highest quality level; if necessary, it is then compared against the lowest acceptable level; and finally, if necessary, it is judged to be closer to one of these levels or the other.
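The evaluation sequence above can be sketched as a short decision procedure. This is a minimal illustration, not the course's actual grading code: the function name is invented, and the boolean inputs stand in for the grader's judgment at each comparison step.

```python
def evaluate_criterion(attempted, meets_a, meets_c_minus, closer_to_a=None):
    """Illustrative sketch of the two-anchor, six-level evaluation algorithm.

    attempted:     did the student attempt this criterion at all?
    meets_a:       does the work match the highest-quality (A) description?
    meets_c_minus: does the work meet at least the minimum (C-) description?
    closer_to_a:   for work between the anchors, which description is it
                   closer to? (None means it matches the C- description
                   outright, so no proximity judgment is needed.)
    """
    if not attempted:
        return "no attempt"
    if meets_a:                       # step 1: compare against the A anchor
        return "A (full credit)"
    if not meets_c_minus:             # step 2: compare against the C- anchor
        return "below minimum"
    if closer_to_a is None:           # squarely at the minimum standard
        return "C-"
    # step 3: between the anchors, judge proximity to each description
    return "closer to A" if closer_to_a else "closer to C-"
```

The grader only reaches the proximity judgment when the first two comparisons do not settle the level, which is what keeps the two-column design quick to apply.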
The final element of this project involves the training of our teaching assistants to obtain consistent evaluation of student work across all students in the class. This consists of a calibration exercise before the start of the semester, and regular spot-checking by lead teaching assistants during the semester.
In the full paper we will describe our rubric development and implementation process with examples taken directly from our introductory engineering course (approximately 750 students across two sections, with 15 teaching assistants per section). We will present qualitative and quantitative evidence that the use of rubrics per our methodology results in high grading consistency and timely grade turnaround while remaining relatively user-friendly for teaching assistants to implement. Quantitative evidence will include a comparison of inter-rater reliability for course assignments both pre- and post-implementation of the streamlined rubric algorithm. We will also present feedback from teaching assistants on the ease of use of the new algorithm.