New Orleans, Louisiana
June 26, 2016
August 28, 2016
Design in Engineering Education
This paper describes the process of testing and refining modular rubric rows developed for the assessment of engineering design activities. The work is one component of a larger project to develop universal analytic rubrics for valid and reliable competency assessment across different academic disciplines and years of study. The project is being undertaken by researchers based in the faculty of applied science and engineering at a large research-intensive public university. From January 2014 to June 2015, we defined and validated indicators (criteria) for design and communication learning outcomes, then created descriptors for each indicator to discriminate between four levels of performance: Fails, Below, Meets, and Exceeds graduate expectations. From this bank of modular rubric items, applicable rows can be selected and compiled to produce a rubric tailored to a particular assessment activity. Here we discuss these rubrics within the larger context of the assessment of engineering design. We tested draft rubrics in focus group sessions with assessors, then followed the testing with structured discussions to elicit feedback on the quality and usability of the rubrics and to investigate how the assessors interpreted the language used in the indicators and descriptors. We asked participants to identify indicators they believed were irrelevant, redundant, or missing from the rubric; to identify and discuss indicators and descriptors they found confusing; and, finally, to recommend changes and to suggest training materials they would find useful when using rubrics of this design. By transcribing, coding, and analyzing recordings of these discussions, we identified rubric content that is unclear, ambiguous, or confusing to assessors and synthesized their recommendations for making the rubrics more usable. While some rubric rows drew similar criticism from most participants, we identified many differences in assessors' rubric design preferences and in how they apply rubrics to evaluate student work. For example, some participants stated that the level of specificity in the indicators and descriptors made it more difficult to select a performance level. This feedback is surprising, as it seems to contradict claims in the literature that providing more specific descriptions of quality makes rating more consistent. It also emerged that assessors hold different conceptions of engineering design and the design process, and are confused when presented with unfamiliar terminology. We will apply these findings to refine the rubrics and to develop accompanying training materials. The improved rubric items will be evaluated for inter-rater reliability and deployed in academic courses, and we will also perform user testing with undergraduate students.
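The abstract does not name a particular inter-rater reliability statistic. As an illustrative sketch only, one common chance-corrected choice for assessors assigning categorical performance levels is Cohen's kappa:

\kappa = \frac{p_o - p_e}{1 - p_e}

where $p_o$ is the observed proportion of agreement between two assessors and $p_e$ is the proportion of agreement expected by chance. For example (hypothetical figures, not results from this study), if two assessors assign the same performance level on 80 of 100 rubric rows ($p_o = 0.80$) and chance agreement is $p_e = 0.50$, then $\kappa = (0.80 - 0.50)/(1 - 0.50) = 0.60$, which common benchmark scales would describe as moderate agreement.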
Dawe, N., Romkey, L., McCahan, S., & Lesmond, G. (2016, June). User Testing with Assessors to Develop Universal Rubric Rows for Assessing Engineering Design. Paper presented at the 2016 ASEE Annual Conference & Exposition, New Orleans, Louisiana. https://doi.org/10.18260/p.27118