Albuquerque, New Mexico
June 24, 2001
June 27, 2001
Enhancing Scoring Reliability in Mid-Program Assessment of Design Denny Davis, Michael Trevisan, Larry McKenzie Washington State University Steve Beyerlein University of Idaho
For the past six years, faculty across Washington State have worked to define and measure design competencies for the first two years of engineering and engineering technology degree programs. A three-part performance-based assessment of student design capabilities at the mid-program level was developed for this purpose. This paper presents a pilot reliability study designed to enhance the consistency of scoring the three-part assessment. Toward this end, three raters participated in a multi-step procedure that included initial scoring of student work, reconciliation of differences among raters, revision of scoring criteria, and development of decision rules for student work that was difficult to score within the existing criteria. Intraclass correlation coefficients computed before and after this process showed marked improvement in inter-rater reliability. The revised scoring criteria and decision rules offer faculty the potential to produce reliable scores for student design performance on constructed-response items and tasks, a prerequisite to sound program decision making.
The design capabilities of graduates from undergraduate engineering education programs continue to be a concern voiced by industry representatives. The need to improve design capabilities is further highlighted by ABET Engineering Criteria 2000, which requires programs seeking accreditation to develop design competencies and a means to assess student design achievement1. In turn, these data are to be used as program feedback and, when necessary, as the basis for program revisions.
A prerequisite to effective use of assessments and sound programmatic decision making from assessment data is that achievement scores be obtained in a consistent manner. Consistency in assessment data is referred to as "reliability" in the assessment literature, and signifies the extent to which the assessment is measuring achievement without error2. Despite the surge of interest in assessment processes within the engineering education literature in recent years, little discussion can be found regarding the quality of assessment data, such as reliability. The purpose of this paper is to illustrate one method for achieving consistent, reliable engineering design assessment results. The findings from this pilot study are preliminary. Multiple studies with all components of the assessment are now underway and may have ramifications for the nature and scope of the assessment. This paper provides a method for obtaining reliable data from a multi-faceted design assessment.
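To make the reliability index concrete: the inter-rater agreement reported in this study was quantified with intraclass correlation coefficients. The paper does not specify which ICC form or software was used, so the sketch below is an illustrative implementation of one common variant, ICC(2,1) (two-way random effects, absolute agreement, single rater), computed from a students-by-raters score matrix via the standard ANOVA mean squares. The function name and the example data are hypothetical, not taken from the study.

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    ratings: n x k array-like, rows = rated subjects (e.g. student work
    samples), columns = raters. Returns the ICC estimate.
    """
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    grand = x.mean()

    # ANOVA decomposition for a two-way layout without replication
    ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()   # between subjects
    ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()   # between raters
    ss_total = ((x - grand) ** 2).sum()
    ss_err = ss_total - ss_rows - ss_cols

    msr = ss_rows / (n - 1)               # subjects mean square
    msc = ss_cols / (k - 1)               # raters mean square
    mse = ss_err / ((n - 1) * (k - 1))    # residual mean square

    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical data: 4 work samples scored by 3 raters
perfect = [[1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 4]]
noisy = [[1, 2, 1], [2, 3, 3], [3, 3, 4], [4, 5, 4]]
print(icc_2_1(perfect))  # 1.0: raters agree exactly
print(icc_2_1(noisy))    # lower: disagreement reduces the coefficient
```

In a scoring-reconciliation study like the one described here, this coefficient would be computed on scores from the initial round and again after criteria revision; an increase indicates improved inter-rater consistency.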
Proceedings of the 2001 American Society for Engineering Education Annual Conference & Exposition Copyright 2001, American Society for Engineering Education
Davis, D., McKenzie, L., Beyerlein, S., & Trevisan, M. (2001, June). Enhancing Scoring Reliability in Mid-Program Assessment of Design. Paper presented at the 2001 Annual Conference, Albuquerque, New Mexico. 10.18260/1-2--9218