
Rubrics Cubed: Tying Grades To Assessment To Reduce Faculty Workload



2004 Annual Conference


Salt Lake City, Utah

Publication Date: June 20, 2004

Start Date: June 20, 2004

End Date: June 23, 2004



Conference Session: BME Assessment

Page Numbers: 9.1077.1 - 9.1077.6




Paper Authors

David Lalush
C. Frank Abrams
Peter Mente
Marian McCord
H. Troy Nagle
Elizabeth Loboa
Susan Blanchard

NOTE: The first page of text has been automatically extracted and included below in lieu of an abstract

Session 1609

Rubrics Cubed: Tying Grades to Assessment to Reduce Faculty Workloads

Susan M. Blanchard, Marian G. McCord, Peter L. Mente, David S. Lalush, C. Frank Abrams, Elizabeth G. Loboa, and H. Troy Nagle

Joint Department of Biomedical Engineering at UNC Chapel Hill and NC State

I. Background

Assessment of program outcomes is an important, but time-consuming, part of the ABET accreditation process for faculty. Many faculty members argue, “I grade; therefore, I assess.” The problem with using grades as assessment tools is that a single grade often covers material that represents more than one programmatic outcome [1, 2]. In addition, grades may vary considerably depending on which faculty member does the grading. The purpose of this paper is to demonstrate that rubrics offer an excellent means of reducing faculty workload by linking grading and assessment [3].

Faculty members of the Biomedical Engineering (BME) Courses and Curriculum Committee, which is also responsible for assessment, have worked as a team to develop several rubrics that individual faculty use to grade projects or other samples of student work in several BME courses. Different components of the rubrics can then be combined in various ways to assess different programmatic outcomes. Each rubric is designed to yield the same grade and/or assessment evaluation regardless of which faculty member does the grading and/or assessing.
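The double duty that a rubric performs here, one scoring pass feeding both a course grade and per-outcome assessment data, can be sketched in a few lines of Python. The component names, point values, and outcome mapping below are invented for illustration; they are not taken from the paper's actual rubrics.

```python
# Hypothetical rubric: each component carries a point value and a list of
# the programmatic outcomes it maps to. One scoring pass produces both a
# grade and per-outcome assessment fractions. All names/weights are
# illustrative assumptions, not the paper's actual rubric.
RUBRIC = {
    # component: (max points, outcomes the component assesses)
    "model correctness":  (40, ["1.c"]),
    "simulation results": (30, ["2.b"]),
    "written report":     (30, ["2.c", "1.c"]),
}

def grade_and_assess(scores):
    """scores: {component: points earned}. Returns (percent grade,
    {outcome: fraction of available points earned})."""
    earned = sum(scores[c] for c in RUBRIC)
    possible = sum(max_pts for max_pts, _ in RUBRIC.values())
    grade = 100.0 * earned / possible

    # Re-aggregate the same component scores by outcome.
    outcome_earned, outcome_possible = {}, {}
    for comp, (max_pts, outcomes) in RUBRIC.items():
        for o in outcomes:
            outcome_earned[o] = outcome_earned.get(o, 0) + scores[comp]
            outcome_possible[o] = outcome_possible.get(o, 0) + max_pts
    assessment = {o: outcome_earned[o] / outcome_possible[o]
                  for o in outcome_earned}
    return grade, assessment

grade, assessment = grade_and_assess(
    {"model correctness": 36, "simulation results": 24, "written report": 27})
```

Because the faculty member records only the component scores, the assessment data fall out of the grading pass for free; no second evaluation of the student work is required.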

Our program has numbered objectives (1, 2, 3, 4), with alphabetically labeled outcomes (a, b, c, …). In the example below, the numbering scheme results from the fact that we are assessing our coverage of only three outcomes (1.c, 2.b, and 2.c) selected from our entire set of 15.

II. Combining assessment and grading

Students in BAE 381 (Human Physiology for Engineers) use Simulink® to reproduce mathematical models of a physiological system. The models reproduced are ones that have been published in peer-reviewed journals [4]. These projects, which are completed in teams of 3-4 students and represent 15% of the course grade for each student, are used to assess three of our BME program’s outcomes. The BME objectives and outcomes addressed by the project, with corresponding ABET 3a-3k outcomes in parentheses, are:

Proceedings of the 2004 American Society for Engineering Education Annual Conference & Exposition Copyright © 2004, American Society for Engineering Education

Lalush, D., & Abrams, C. F., & Mente, P., & McCord, M., & Nagle, H. T., & Loboa, E., & Blanchard, S. (2004, June), Rubrics Cubed: Tying Grades To Assessment To Reduce Faculty Workload Paper presented at 2004 Annual Conference, Salt Lake City, Utah. 10.18260/1-2--14030

ASEE holds the copyright on this document. It may be read by the public free of charge. Authors may archive their work on personal websites or in institutional repositories with the following citation: © 2004 American Society for Engineering Education. Other scholars may excerpt or quote from these materials with the same citation. When excerpting or quoting from Conference Proceedings, authors should, in addition to noting the ASEE copyright, list all the original authors and their institutions and name the host city of the conference. - Last updated April 1, 2015