San Antonio, Texas
June 10, 2012
June 13, 2012
Computing & Information Technology
25.245.1 - 25.245.12
Automatic Quality Assessment for Peer Reviews of Student Work

Reviews are text-based feedback provided by reviewers to the authors of a submission. Since reviews play a crucial role in providing feedback to people who make assessment decisions (e.g., deciding a student's grade), it is important to ensure that reviews are of good quality.

Meta-reviewing can be defined as the process of reviewing reviews, i.e., the process of identifying the quality of reviews. Meta-reviewing is a manual process, and like any manual process it is (a) slow, (b) prone to errors, and (c) likely to be inconsistent. Our work aims to automate the process of determining review quality, that is, meta-reviewing. An automated review process ensures consistent (bias-free) evaluations for all reviewers. It also provides immediate feedback to reviewers, which is likely to motivate a reviewer to improve his or her work and provide more useful feedback to the authors.

In order to determine review quality, our work identifies the content and tone of a review, along with the number of tokens it contains. In evaluating a review for content, we try to determine whether it helped the author identify deficiencies. A good review should furnish guidance, rather than just praising or denigrating the work. The tone of a review is classified as positive, negative, or neutral. The number-of-tokens metric measures the quantity of feedback provided by the reviewer.

To determine the content and tone of a review, we use machine-learning techniques such as Latent Semantic Analysis, along with the cosine similarity metric. Our preliminary analysis using this model to predict the content and tone of reviews yielded average f-measure values of up to 64%.
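The combination of Latent Semantic Analysis and cosine similarity mentioned above can be sketched as follows. This is an illustrative toy example, not the authors' implementation: the corpus, tone labels, and query review are all invented, and real systems would use far larger labeled data and a proper term-weighting scheme.

```python
# Sketch: classify a new review's tone via Latent Semantic Analysis (LSA)
# plus cosine similarity. All reviews and labels here are hypothetical.
import numpy as np

labeled_reviews = [
    ("great work very clear and helpful writing", "positive"),
    ("missing citations unclear method", "negative"),
    ("report covers topic", "neutral"),
]
new_review = "clear and great writing"

# Build a term-document matrix over a shared vocabulary.
vocab = sorted({w for text, _ in labeled_reviews for w in text.split()} |
               set(new_review.split()))

def vectorize(text):
    return np.array([text.split().count(w) for w in vocab], dtype=float)

X = np.array([vectorize(t) for t, _ in labeled_reviews])  # docs x terms

# LSA: a truncated SVD projects term vectors into a low-rank "concept" space.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
project = lambda v: Vt[:k] @ v  # map a term vector into the k-dim LSA space

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

q = project(vectorize(new_review))
scores = [(cosine(q, project(vectorize(t))), label)
          for t, label in labeled_reviews]
predicted_tone = max(scores)[1]  # label of the most similar labeled review
print(predicted_tone)  # -> positive
```

The new review is assigned the tone of the labeled review it is closest to in the reduced LSA space; in practice this nearest-neighbor step could be replaced by any classifier trained on the LSA features.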
This is much better than the average baseline accuracy of the classification system, which is around 29%. Our approach also predicts meta-review scores for a review by computing the similarity of a new review to reviews that have previously been meta-reviewed by human evaluators. An initial evaluation of our technique produced accuracy values greater than 50% in predicting meta-review scores in the range of 1 to 5 (a Likert scale). This is better than randomly assigning a score in the range of 1 to 5, which would have a baseline accuracy of 20%.

We plan to extend the process of review quality identification to include the relevance of a review to the submission it was written for, as well as its relevance to existing good-quality reviews. Relevance is determined using a graph-based matching technique and metrics that help identify text paraphrasing and word-sense similarities. From a preliminary study in this domain, we found that the syntactic information provided by the graph structures and the semantic relationships determined by the paraphrasing metrics play an important role in determining relevance.

Our experiments are conducted using data provided by Expertiza, a web-based collaborative learning environment that supports reviewing and meta-reviewing.
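The score-prediction idea of comparing a new review against previously meta-reviewed reviews can be sketched as a nearest-neighbor lookup. This is a minimal illustration under simplifying assumptions: the scored reviews are invented, and plain bag-of-words cosine similarity stands in for whatever similarity model the full system uses.

```python
# Sketch: predict a meta-review score (1-5 Likert) for a new review by
# giving it the score of its most similar previously meta-reviewed review.
# The scored reviews below are hypothetical examples.
from collections import Counter
import math

scored_reviews = [
    ("explains the flaw and suggests a fix", 5),
    ("good", 1),
    ("covers some issues but gives little guidance", 3),
]
new_review = "identifies the flaw and suggests improvements"

def cosine(a, b):
    """Cosine similarity between two texts as bag-of-words vectors."""
    ca, cb = Counter(a.split()), Counter(b.split())
    dot = sum(ca[w] * cb[w] for w in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

# Score of the nearest previously meta-reviewed review.
predicted = max(scored_reviews, key=lambda r: cosine(new_review, r[0]))[1]
print(predicted)  # -> 5
```

The random-assignment baseline of 20% mentioned above corresponds to guessing uniformly among the five Likert scores; the similarity-based lookup only has to beat that baseline to show the signal is real.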
ASEE holds the copyright on this document. It may be read by the public free of charge. Authors may archive their work on personal websites or in institutional repositories with the following citation: © 2012 American Society for Engineering Education. Other scholars may excerpt or quote from these materials with the same citation. When excerpting or quoting from Conference Proceedings, authors should, in addition to noting the ASEE copyright, list all the original authors and their institutions and name the host city of the conference. - Last updated April 1, 2015