Automatic Quality Assessment for Peer Reviews of Student Work

Conference

2012 ASEE Annual Conference & Exposition

Location

San Antonio, Texas

Publication Date

June 10, 2012

Start Date

June 10, 2012

End Date

June 13, 2012

ISSN

2153-5965

Conference Session

Emerging Information Technologies

Tagged Division

Computing & Information Technology

Page Count

12

Page Numbers

25.245.1 - 25.245.12

DOI

10.18260/1-2--21005

Permanent URL

https://peer.asee.org/21005

Download Count

388

Paper Authors

Lakshmi Ramachandran, North Carolina State University

Edward F. Gehringer, North Carolina State University

Ed Gehringer is an Associate Professor in the departments of Computer Science and Electrical and Computer Engineering at North Carolina State University. He received his Ph.D. from Purdue University and has also taught at Carnegie Mellon University and Monash University in Australia. His research interests lie mainly in computer-supported cooperative learning.

Abstract

Reviews are text-based feedback provided by reviewers to the authors of a submission. Since reviews play a crucial role in informing assessment decisions (e.g., determining a student's grade), it is important to ensure that they are of good quality.

Meta-reviewing is the process of reviewing reviews, i.e., of assessing their quality. Meta-reviewing is a manual process, and like any manual process it is (a) slow, (b) prone to error, and (c) likely to be inconsistent. Our work aims to automate meta-reviewing, that is, the determination of review quality. An automated process gives all reviewers consistent (bias-free) evaluations, and it provides immediate feedback, which is likely to motivate reviewers to improve their work and provide more useful feedback to authors.

To determine review quality, our work identifies the content and tone of a review, along with the number of tokens it contains. In evaluating a review for content, we try to determine whether it helped the author identify deficiencies; a good review should furnish guidance rather than merely praising or denigrating the work. The tone of a review is classified as positive, negative, or neutral. The number-of-tokens metric measures the quantity of feedback provided by the reviewer.

To determine the content and tone of a review, we use machine-learning techniques such as Latent Semantic Analysis (LSA), together with the cosine-similarity metric. A preliminary analysis using this model to predict the content and tone of reviews yielded average f-measure values of up to 64%, well above the classification system's average baseline accuracy of about 29%.

Our approach also predicts a meta-review score for a review by computing the similarity of the new review to reviews that have previously been meta-reviewed by human evaluators. An initial evaluation of this technique produced accuracy greater than 50% in predicting meta-review scores on a 1-to-5 Likert scale, compared with a baseline accuracy of 20% for randomly assigning a score in that range.

We plan to extend review-quality identification to include the relevance of a review both to the submission it was written for and to existing good-quality reviews. Relevance is determined using a graph-based matching technique, together with metrics that identify text paraphrasing and word-sense similarities. A preliminary study in this domain found that the syntactic information provided by the graph structures and the semantic relationships identified by the paraphrasing metrics play an important role in determining relevance.

Our experiments are conducted using data from Expertiza, a web-based collaborative learning environment that supports reviewing and meta-reviewing.
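The classification step the abstract describes lends itself to a short illustration. Below is a minimal Python sketch of LSA (TF-IDF followed by truncated SVD) with cosine-similarity matching, assuming scikit-learn; the toy corpus, the tone labels, and the nearest-labeled-neighbor decision rule are invented for the example and are not taken from the paper.

```python
# Hedged sketch of LSA + cosine similarity, standing in for the
# content/tone classification step described in the abstract.
# The corpus and labels below are illustrative assumptions.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

reviews = [
    "Great structure, but the evaluation lacks a baseline comparison.",
    "This section is sloppy and the argument is unmotivated.",
    "The paper describes a web-based peer-review system.",
]
tones = ["positive", "negative", "neutral"]  # hand-labeled examples

tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(reviews)            # term-document matrix
lsa = TruncatedSVD(n_components=2, random_state=0)
X_lsa = lsa.fit_transform(X)                # reduced "semantic" space

# Classify a new review by its most similar labeled neighbor.
new = lsa.transform(tfidf.transform(["Good idea, but add a baseline."]))
sims = cosine_similarity(new, X_lsa)[0]
print("predicted tone:", tones[int(np.argmax(sims))])
```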
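The score-prediction step can be sketched the same way: a new review inherits the 1-to-5 meta-review score of the most similar previously scored review. The nearest-neighbor rule and the example reviews and scores here are assumptions for illustration; the paper's actual model may combine similarities differently.

```python
# Hedged sketch: predict a meta-review score by similarity to reviews
# already scored by human meta-reviewers (scores are illustrative).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

scored = {  # review text -> human meta-review score (1-5 Likert)
    "No comments.": 1,
    "Looks fine to me.": 2,
    "Clear writing, but cite sources for the main claims.": 4,
    "Section 3 misstates the algorithm; fix the loop bound and retest.": 5,
}

vec = TfidfVectorizer()
X = vec.fit_transform(list(scored))

def predict_score(review: str) -> int:
    """Return the score of the most similar previously scored review."""
    sims = cosine_similarity(vec.transform([review]), X)[0]
    return list(scored.values())[int(np.argmax(sims))]

print(predict_score("Well written, though the claims need citations."))
```

Against a 1-to-5 scale, random assignment gives the 20% baseline the abstract cites, so even this simple rule has a concrete bar to clear.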

Ramachandran, L., & Gehringer, E. F. (2012, June). Automatic Quality Assessment for Peer Reviews of Student Work. Paper presented at 2012 ASEE Annual Conference & Exposition, San Antonio, Texas. 10.18260/1-2--21005

ASEE holds the copyright on this document. It may be read by the public free of charge. Authors may archive their work on personal websites or in institutional repositories with the following citation: © 2012 American Society for Engineering Education. Other scholars may excerpt or quote from these materials with the same citation. When excerpting or quoting from Conference Proceedings, authors should, in addition to noting the ASEE copyright, list all the original authors and their institutions and name the host city of the conference.