June 24, 2017
June 28, 2017
Educational Research and Methods
Evaluating Freshman Engineering Design Projects Using Adaptive Comparative Judgment
This evidence-based practice paper examines the use of a relatively new form of assessment, adaptive comparative judgment, and considers its reliability, validity, and feasibility in contrast to traditional assessment techniques. Engineering programs often host multiple open-ended student design projects. The most common method of assessing design projects is to assign scores to student work using a predetermined rubric (Pollitt, 2004). These assigned scores can be holistic in nature or based on micro-judgments that are summed to produce a macro-judgment of student performance (Pollitt, 2004; Kimbell, 2012). However, traditional rubric-based scoring of student design work suffers from low reliability when multiple graders assess the same work (Pollitt, 2004, 2012). As a solution to this issue, Pollitt (2004) presents an alternative form of assessment known as adaptive comparative judgment (ACJ). ACJ relies on comparisons of student work rather than rubrics. Bartholomew et al. (2016) describe this method as the process of showing judges a piece of work (e.g., essays, pictures, technical drawings, engineering notebooks, or design portfolios) from two different students or student groups and asking them to indicate which piece of work is better. The judges are not asked to provide a grade for each piece of work; rather, they provide a holistic decision as to which artifact is better based on their own professional judgment. In each round of judgment, every artifact is compared to another, and rounds continue until a sufficient reliability level is reached and a final rank order for student work is obtained. While some may argue against the idea of comparing students to one another, Kimbell (2012) and Pollitt (2004) explain that any kind of assessment is essentially a comparison of one thing to another. As Pollitt states, “All judgments are relative.
When we try to judge a performance against grade descriptors we are imagining or remembering other performances and comparing new performances to them” (2004, p. 6). The ACJ method has been shown to be more reliable and valid than traditional methods of assessment (Bartholomew et al., 2016; Kimbell, 2012; Pollitt, 2004, 2006, 2012). The theoretical development of ACJ has led to the creation of a grading engine by TAG Assessment titled CompareAssess, a platform in which student work can be rated by multiple judges and which algorithmically outputs a rank order and standardized scores of relative work quality. This paper will examine the use of CompareAssess as a means for evaluating the design projects of undergraduate engineering students by having multiple judges compare the design artifacts of 16 undergraduate engineering students. The authors will analyze the reliability and validity of this method when compared to the performance data of each student’s solution and the traditional rubric used to evaluate the project.
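To make the ACJ process described above concrete, the sketch below shows how a set of pairwise "which is better?" judgments can be turned into a rank order by fitting a Bradley–Terry model, a standard statistical model for paired comparisons. This is a minimal illustration only: the item names, judging data, and fitting routine are the author's assumptions for this sketch, not the actual CompareAssess implementation or its adaptive pairing algorithm.

```python
def bradley_terry(n_items, wins, n_iters=200):
    """Estimate item 'strengths' from pairwise win counts.

    wins[(i, j)] = number of times item i was judged better than item j.
    Uses the classic iterative (Zermelo/MM) update: each item's strength
    is its total wins divided by a sum weighted by its opponents' strengths.
    """
    strength = [1.0] * n_items
    for _ in range(n_iters):
        new = []
        for i in range(n_items):
            w_i = sum(wins.get((i, j), 0) for j in range(n_items))
            denom = 0.0
            for j in range(n_items):
                if j == i:
                    continue
                n_ij = wins.get((i, j), 0) + wins.get((j, i), 0)
                if n_ij:
                    denom += n_ij / (strength[i] + strength[j])
            new.append(w_i / denom if denom else strength[i])
        total = sum(new)
        strength = [s * n_items / total for s in new]  # normalize each pass
    return strength

# Hypothetical judging data: 4 design portfolios (0..3), each pair compared
# four times, with an underlying quality order of 0 > 1 > 2 > 3.
wins = {
    (0, 1): 3, (1, 0): 1,
    (0, 2): 4, (2, 0): 0,
    (0, 3): 4, (3, 0): 0,
    (1, 2): 3, (2, 1): 1,
    (1, 3): 4, (3, 1): 0,
    (2, 3): 3, (3, 2): 1,
}
strengths = bradley_terry(4, wins)
# Sort portfolios from strongest to weakest to recover the rank order.
rank_order = sorted(range(4), key=lambda i: -strengths[i])
```

Running this on the hypothetical data recovers the rank order 0, 1, 2, 3. In real ACJ systems the pairing is adaptive (similar-strength artifacts are matched as judging proceeds) and judging stops once a reliability threshold is reached, refinements this sketch omits.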
Strimel, G. J., Bartholomew, S. R., Jackson, A., Grubbs, M., & Bates, D. G. M. (2017, June). Evaluating freshman engineering design projects using adaptive comparative judgment. Paper presented at the 2017 ASEE Annual Conference & Exposition, Columbus, Ohio. https://doi.org/10.18260/1-2--28301
ASEE holds the copyright on this document. It may be read by the public free of charge. Authors may archive their work on personal websites or in institutional repositories with the following citation: © 2017 American Society for Engineering Education. Other scholars may excerpt or quote from these materials with the same citation. When excerpting or quoting from Conference Proceedings, authors should, in addition to noting the ASEE copyright, list all the original authors and their institutions and name the host city of the conference.