Conference Location: Portland, Oregon
Publication Date: June 12, 2005
Conference Dates: June 12-15, 2005
ISSN: 2153-5965
Page Count: 5
Pages: 10.430.1 - 10.430.5
DOI: 10.18260/1-2--14413
Permanent URL: https://peer.asee.org/14413
Session 1526
Developing a Peer Evaluation Instrument that is Simple, Reliable, and Valid
Matthew W. Ohland, Misty L. Loughry, Rufus L. Carter, Lisa G. Bullard, Richard M. Felder, Cynthia J. Finelli, Richard A. Layton, and Douglas G. Schmucker
General Engineering, Management, Clemson University / Institutional Research and Assessment, Marymount University / Chemical and Biomolecular Engineering, North Carolina State University / Center for Research on Learning and Teaching-North, University of Michigan / Mechanical Engineering, Rose-Hulman Institute of Technology / Civil Engineering, Western Kentucky University
Abstract
A multi-university research team is working to design a peer evaluation instrument for cooperative learning teams that is simple, reliable, and valid. This paper presents an overview of the process of developing behaviorally anchored rating scales (BARS), including the establishment of a theoretical basis for the instrument and a description of the extensive classroom testing of the draft instrument conducted during fall 2004.
Introducing the draft instrument to the engineering education community at the NSF grantees’ poster session is expected both to improve the validity of the scale through the feedback we receive and to accelerate the dissemination of the instrument.
Introduction
This project and its goals were introduced in earlier work.1 ABET’s requirement that engineering graduates have an ability to function on multi-disciplinary teams2 has driven an expanded use of cooperative learning in engineering curricula.3 A fundamental tenet of cooperative learning is holding individual team members accountable for fulfilling their responsibilities to the team. An effective and increasingly common way of addressing this tenet is to have team members rate one another’s performance and to use the ratings to adjust the team assignment grades for individual performance. The challenge is to devise a rating system that is fair, simple to administer, reliable, and valid.
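To make the adjustment mechanism concrete, the Python sketch below illustrates one common way peer ratings can be converted into individual grade adjustments. The specific formula (each member’s average received rating divided by the team average, capped at a small bonus), the cap value, and all names in the code are illustrative assumptions; they are not the instrument or weighting scheme developed in this work.

    # Hypothetical sketch: adjusting a team assignment grade by peer ratings.
    # The ratio-to-team-average formula and the 1.05 cap are assumptions for
    # illustration only.

    def adjustment_factors(ratings, cap=1.05):
        """ratings[i][j] is the rating that team member j received from rater i."""
        n = len(ratings)
        # Average rating received by each team member across all raters.
        received = [sum(row[j] for row in ratings) / n for j in range(n)]
        team_avg = sum(received) / len(received)
        # Each member's factor is their average relative to the team average.
        return [min(r / team_avg, cap) for r in received]

    def individual_grades(team_grade, ratings):
        """Scale the team grade by each member's adjustment factor."""
        return [round(team_grade * f, 1) for f in adjustment_factors(ratings)]

    # Example: three raters scoring three team members on a 1-5 scale (rows = raters).
    peer_ratings = [
        [5, 4, 3],
        [5, 4, 2],
        [4, 4, 3],
    ]
    print(individual_grades(85.0, peer_ratings))  # approximately [89.2, 89.2, 60.0]

In this sketch, a member rated above the team average earns slightly more than the team grade (up to the cap), while a member rated below it earns proportionally less.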
Our prior experience was based on a peer rating system developed by Robert Brown of the Royal Melbourne Institute of Technology.4,5 Brown’s system is a single-item form of behaviorally anchored rating scale (BARS), an instrument that aims to improve validity by reducing the subjectivity of ratings through verbal descriptions that anchor the points of the scale.6 The BARS was one of a variety of rating scales studied between 1960 and 1980. As rating-scale researchers became convinced that performance ratings were robust to changes in rating-scale format,7 research into new rating formats waned, and it has remained slow over the past 15 years.8
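As an illustration of the BARS format, the sketch below represents a single-item scale in which each numerical point carries a verbal description of observable behavior. The anchor wording and the helper function are hypothetical examples for illustration; they are not the anchors used on Brown’s form or in the draft instrument described in this paper.

    # A single-item behaviorally anchored rating scale (BARS): each scale point
    # is anchored by a verbal description of observable team behavior.
    # The anchor wording below is hypothetical.
    TEAM_CONTRIBUTION_ANCHORS = {
        5: "Routinely did more than their share and actively helped teammates",
        4: "Did a full share of the work and met all team deadlines",
        3: "Did most of what was expected, with occasional reminders",
        2: "Needed frequent prodding and missed some team deadlines",
        1: "Rarely contributed; teammates had to redo or absorb the work",
    }

    def anchor_for(rating):
        """Return the behavioral description that anchors a given scale point."""
        return TEAM_CONTRIBUTION_ANCHORS[rating]

    print(anchor_for(4))  # "Did a full share of the work and met all team deadlines"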
Carter, R., & Bullard, L. G., & Schmucker, D. G., & Loughry, M., & Felder, R., & Ohland, M., & Layton, R., & Finelli, C. (2005, June), Developing a Peer Evaluation Instrument That Is Simple, Reliable, and Valid. Paper presented at the 2005 ASEE Annual Conference, Portland, Oregon. 10.18260/1-2--14413