Salt Lake City, Utah
June 20–23, 2004
Pages 9.89.1–9.89.11
A PROTOCOL FOR PEER REVIEW OF TEACHING
Rebecca Brent, Education Designs, Inc.
Richard M. Felder, North Carolina State University
A peer review protocol that serves both formative and summative functions has been implemented at North Carolina State University. For summative evaluation, two or more reviewers use standardized checklists to independently rate instructional materials (syllabus, learning objectives, assignments, tests, and other items) and at least two class observations, and then reconcile their ratings. For formative evaluation, only one rater completes the forms and the results are shared only with the faculty member being rated rather than being used as part of his/her overall teaching performance evaluation. Pilot test results of the summative protocol show a high level of inter-rater reliability. This paper presents a brief overview of the reasons for including peer review in teaching performance evaluation and the problems with the way it has traditionally been done, describes and discusses the protocol, summarizes the pilot test results, and demonstrates how the use of the protocol can minimize or eliminate many common concerns about peer review of teaching.
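The abstract reports a high level of inter-rater reliability for the summative checklists but does not name the statistic used. One common measure for agreement between two raters on categorical checklist items is Cohen's kappa, which corrects raw agreement for agreement expected by chance. The sketch below is purely illustrative (the ratings and the 3-point scale are hypothetical, not data from the pilot test):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' categorical ratings of the same items."""
    assert len(rater_a) == len(rater_b) and rater_a, "ratings must be paired"
    n = len(rater_a)
    # Observed proportion of items on which the raters agree exactly
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each rater's marginal rating frequencies
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c]
                   for c in set(counts_a) | set(counts_b)) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical ratings by two reviewers on eight checklist items (3-point scale)
reviewer_1 = [3, 2, 3, 3, 1, 2, 3, 2]
reviewer_2 = [3, 2, 3, 2, 1, 2, 3, 3]
print(round(cohens_kappa(reviewer_1, reviewer_2), 2))  # → 0.58
```

Kappa near 1 indicates near-perfect agreement; values near 0 indicate agreement no better than chance. Weighted variants of kappa, which penalize large rating discrepancies more than adjacent ones, are often preferred for ordinal checklist scales like the one assumed here.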
Mounting pressures on engineering schools to improve the quality of their instructional programs have been coming from industry, legislatures, governing boards, and ABET. An added impetus for improving engineering instruction is a growing competition for a shrinking pool of qualified students. If enrollment falls below a critical mass, the loss in revenues from tuition and other funds tied to enrollment could place many engineering schools in serious economic jeopardy.
A prerequisite to improving teaching is having an effective way to evaluate it. Standard references on the subject all agree that the best way to get a valid summative evaluation of teaching is to base it on a portfolio containing assessment data from multiple sources—ratings from students, peers, and administrators, self-ratings, and learning outcomes—that reflect on every aspect of teaching including course design, classroom instruction, assessment of learning, advising, and mentoring.1–4 A schematic diagram of a comprehensive evaluation system that incorporates these elements is shown in Figure 1.5 This paper deals with the peer review component of the system. Other references may be consulted for information regarding student ratings of teaching6–9 and teaching portfolios.4,10–12
Why, How, and How Not to Do Peer Review
For the last half century, the standard way to evaluate teaching has been to collect course-end student rating forms and compile the results. While student ratings have considerable validity,6 they also have limitations. Among other things, students are not qualified to evaluate
Proceedings of the 2004 American Society for Engineering Education Annual Conference & Exposition Copyright © 2004, American Society for Engineering Education
Felder, R. (2004, June). A Protocol for Peer Review of Teaching. Paper presented at the 2004 Annual Conference, Salt Lake City, Utah. doi:10.18260/1-2--13897