
Understanding Grader Reliability through the Lens of Cognitive Modeling



2019 ASEE Annual Conference & Exposition


Tampa, Florida

Publication Date: June 15, 2019

Start Date: June 15, 2019

End Date: June 19, 2019

Conference Session: ERM Technical Session 5: Assessment

Tagged Division: Educational Research and Methods


Paper Authors


Nathan M. Hicks, Purdue University, West Lafayette

Nathan M. Hicks is a Ph.D. student in Engineering Education at Purdue University. He received his B.S. and M.S. degrees in Materials Science and Engineering at the University of Florida and taught high school math and science for three years.



Kerrie A. Douglas, Purdue University, West Lafayette

Dr. Douglas is an Assistant Professor in the Purdue School of Engineering Education. Her research is focused on improving methods of assessment in large learning environments to foster high-quality learning opportunities. Additionally, she studies techniques to validate findings from machine-generated educational data.



Heidi A. Diefes-Dux, University of Nebraska, Lincoln

Heidi A. Diefes-Dux is a Professor in Biological Systems Engineering at the University of Nebraska - Lincoln. She received her B.S. and M.S. in Food Science from Cornell University and her Ph.D. in Food Process Engineering from the Department of Agricultural and Biological Engineering at Purdue University. She was an inaugural faculty member of the School of Engineering Education at Purdue University. Her research focuses on the development, implementation, and assessment of modeling and design activities with authentic engineering contexts. She also focuses on the implementation of learning objective-based grading and teaching assistant training.



This Work-in-Progress paper presents the development of a process model for rubric-based grading of open-ended engineering problems across many graders. Because many problems in engineering are open-ended, best practices dictate the use of authentic open-ended performance tasks for assessment in engineering courses. Unfortunately, evaluating open-ended work inherently requires some degree of subjective judgment. For assessment to be considered valid, however, it is imperative that evaluation be fair, which includes holding all students to the same, transparently communicated expectations and standards. In large, multi-section courses, which are common in first-year engineering programs, achieving this fairness of evaluation across all students becomes a significant challenge. Recognizing that the grading process consists of humans interacting with objects and making decisions, this study treats grading as a complex socio-technical system and describes the development of a model using the Functional Resonance Analysis Method (FRAM). The paper describes how the FRAM will be applied to identify sources that contribute to variability in system performance and possible corresponding control mechanisms.
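To make the FRAM framing concrete: FRAM characterizes each function in a system by six aspects (Input, Output, Precondition, Resource, Time, Control) and traces couplings where one function's output feeds another's input, since those couplings are where variability can propagate. The sketch below is a minimal illustration of that idea, not the authors' actual model; the specific grading functions and signal names are hypothetical examples.

```python
from dataclasses import dataclass, field

@dataclass
class FramFunction:
    """One FRAM function, characterized by its six aspects."""
    name: str
    inputs: set = field(default_factory=set)
    outputs: set = field(default_factory=set)
    preconditions: set = field(default_factory=set)
    resources: set = field(default_factory=set)
    time: set = field(default_factory=set)
    controls: set = field(default_factory=set)

def couplings(functions):
    """Yield (upstream, downstream, signal) wherever one function's output
    matches another function's input -- the couplings along which
    performance variability can propagate."""
    for up in functions:
        for down in functions:
            for signal in up.outputs & down.inputs:
                yield (up.name, down.name, signal)

# Hypothetical grading functions, for illustration only
read_work = FramFunction("Read student work",
                         inputs={"submission"}, outputs={"interpretation"})
apply_rubric = FramFunction("Apply rubric",
                            inputs={"interpretation"}, outputs={"score"},
                            controls={"rubric"}, resources={"grader training"})
record = FramFunction("Record score",
                      inputs={"score"}, outputs={"grade entry"},
                      time={"grading deadline"})

for up, down, signal in couplings([read_work, apply_rubric, record]):
    print(f"{up} -> {down} via {signal}")
```

Analyzing such a model would then ask where each coupling's signal can vary (e.g., two graders producing different "interpretation" from the same submission) and what controls, such as the rubric or training, could dampen that variability.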

Hicks, N. M., Douglas, K. A., & Diefes-Dux, H. A. (2019, June), Understanding Grader Reliability through the Lens of Cognitive Modeling. Paper presented at 2019 ASEE Annual Conference & Exposition, Tampa, Florida. 10.18260/1-2--33477

ASEE holds the copyright on this document. It may be read by the public free of charge. Authors may archive their work on personal websites or in institutional repositories with the following citation: © 2019 American Society for Engineering Education. Other scholars may excerpt or quote from these materials with the same citation. When excerpting or quoting from Conference Proceedings, authors should, in addition to noting the ASEE copyright, list all the original authors and their institutions and name the host city of the conference. - Last updated April 1, 2015