Salt Lake City, Utah
June 20, 2004
June 23, 2004
Better Understanding through Writing: Investigating Calibrated Peer Review™

John C. Wise, Seong Kim
The Pennsylvania State University
Calibrated Peer Review (CPR) was initially developed at UCLA in the 1990s as a way to use technology to increase the opportunities for student writing assignments.1 Writing about a concept has long been regarded as one of the best ways to demonstrate student understanding. Unfortunately, assigning more student writing has always meant weekends lost in a sea of paper and grading schemes that ebb and flow in their accuracy. CPR applies the process of scientific peer review to education. Students perform research (study), write about their “findings”, submit their work for blind review (and act as reviewers themselves), and finally use peer feedback to improve their understanding. With CPR, all of this is possible without intervention from the instructor.
This paper reports on part of a continuing study on the utility of CPR in engineering education. In this instance, CPR was introduced into a writing-intensive laboratory course in chemical engineering. Students worked in teams, but were required to submit individually-crafted executive summaries using the CPR system. Assessment was based on instructor inspection of student work compared with that of previous semesters, and on a survey administered to the students.
CPR was originally developed as a writing aid for large-enrollment chemistry courses, but is now being used across a variety of disciplines and subjects at over 300 schools and universities.1 The underlying theory is based on the scientific writing process. Students research a topic, produce an essay, report, or similar written output, and then submit their work for peer review. They also participate as reviewers themselves. The final stage requires the students to review their own work after having seen their peers’ writing. The process is illustrated in Figure 1.
While several other web-based peer review tools have been designed2,3, the “calibration” stage is unique to CPR. When the assignment is created, the instructor/author must develop three examples of student work: one excellent, one average, and one poor. The instructor/author then creates a scoring scheme (rubric) and rates each of the sample texts. After a student has completed his or her own writing, and prior to being allowed to rate any fellow students, the student is presented with these examples and asked to rate them. This rating is compared to the rating assigned by the instructor/author. The similarity between the student’s and instructor’s ratings on the same text determines that student’s Reviewer Competency Index, or RCI. The RCI ranges from a low of 1 to a high of 6 and is used to weight the rating given by any particular student.
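The calibration-and-weighting idea above can be sketched in code. CPR’s actual RCI formula is not given in this paper (only that the index ranges from 1 to 6 and weights each student’s ratings), so the mapping below from rating agreement to RCI is a hypothetical linear one, and the function names and the assumed 1–10 rubric scale are illustrative, not CPR’s real implementation:

```python
def reviewer_competency_index(student_ratings, instructor_ratings):
    """Map agreement on the three calibration texts to an RCI in [1, 6].

    Hypothetical linear mapping: smaller average deviation from the
    instructor's ratings yields a higher RCI. Assumes rubric scores
    lie on a 1-10 scale, so the average deviation is at most 9.
    """
    deviations = [abs(s, ) if False else abs(s - i)
                  for s, i in zip(student_ratings, instructor_ratings)]
    avg_dev = sum(deviations) / len(deviations)
    rci = 6 - 5 * min(avg_dev, 9) / 9  # avg_dev 0 -> RCI 6, avg_dev 9 -> RCI 1
    return round(rci, 2)


def weighted_peer_score(peer_ratings, rcis):
    """RCI-weighted average of the peer ratings a single text received."""
    total_weight = sum(rcis)
    return sum(r * w for r, w in zip(peer_ratings, rcis)) / total_weight


# A student who rated the excellent/average/poor calibration texts
# close to the instructor's 9/5/3 earns a high RCI:
rci = reviewer_competency_index([9, 6, 3], [9, 5, 3])
```

Under this sketch, a careless reviewer’s scores count for little in the weighted average, which is how the calibration stage lets CPR return meaningful grades without instructor intervention.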
Proceedings of the 2004 American Society for Engineering Education Annual Conference & Exposition Copyright © 2004, American Society for Engineering Education
Wise, J. (2004, June), Better Understanding Through Writing: Calibrated Peer Review (Tm) Paper presented at 2004 Annual Conference, Salt Lake City, Utah. 10.18260/1-2--13497