An Automated Approach to Assessing the Quality of Code Reviews

Conference

2012 ASEE Annual Conference & Exposition

Location

San Antonio, Texas

Publication Date

June 10, 2012

Start Date

June 10, 2012

End Date

June 13, 2012

ISSN

2153-5965

Conference Session

Software Engineering Topics

Tagged Division

Software Engineering Constituent Committee

Page Count

11

Page Numbers

25.154.1 - 25.154.11

DOI

10.18260/1-2--20914

Permanent URL

https://peer.asee.org/20914

Paper Authors

Lakshmi Ramachandran

Edward F. Gehringer, North Carolina State University

Ed Gehringer is an Associate Professor in the departments of Computer Science and Electrical & Computer Engineering at North Carolina State University. He received his Ph.D. from Purdue University and has also taught at Carnegie Mellon University and Monash University in Australia. His research interests lie mainly in computer-supported cooperative learning.

Abstract

Peer review of code and other software documents is an integral component of the software development life cycle. In software engineering courses, students in a class can peer-review the work of others in the class. To help students improve their reviewing skills, feedback needs to be provided on the reviews they write.

The process of reviewing a review can be referred to as meta-reviewing; it is the process of evaluating review quality. Meta-reviewing is at present carried out manually, and like any manual process it is (a) slow, (b) error-prone, and (c) inconsistent. We address the problem of automating meta-reviewing. An automated process ensures consistent (bias-free) evaluations for all reviewers. It can also provide immediate feedback, which is likely to motivate reviewers to improve their work and provide more useful feedback to authors.

Our metrics for evaluating the quality of reviews of textual assignments include the content and tone of the review, the number of tokens in the review text, and the review's relevance to the submission. For reviews of textual submissions, the focus is likely to be more on the syntax and semantics of the text. In a preliminary analysis we calculated textual metrics such as content, tone, and number of tokens, and evaluated their usefulness in predicting meta-review scores. We observed accuracy values greater than 50%, compared with a baseline accuracy of 20%.

Our approach has also produced promising results in identifying the relevance of textual reviews. We incorporate syntactic and semantic features into the relevance-identification process; in a preliminary study we found that the graph structures used to capture syntactic relationships and the paraphrasing metrics used to capture semantics were both helpful in determining relevance.

In this presentation, we focus especially on reviews written for code in software engineering and related courses, such as object-oriented design. Our aim is to identify a suitable model with which code reviews can be represented. For instance, factors such as the identification of certain types of errors or bugs, or the mention of program keywords or error statements, might be important in determining the quality of a code review. We are gathering data this fall on reviews of application code and plan to model and evaluate reviews written for code. We are collecting these reviews using Expertiza, a web-based collaborative learning environment. In this paper, we report on how the review process helped students, and we apply our automated process to the meta-reviewing of reviews of application code.
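To make the textual metrics concrete, here is a minimal Python sketch of how a token count, a lexicon-based tone score, and a vocabulary-overlap relevance measure might be computed for a single review. Everything in it is an illustrative assumption, not the authors' implementation: the lexicons, the Jaccard-style overlap, and all function names are ours, and the paper's actual relevance method uses graph structures and paraphrase metrics rather than simple overlap.

    import re

    # Illustrative tone lexicons; the paper's actual tone model is not specified here.
    POSITIVE = {"clear", "good", "well", "helpful", "correct", "clean"}
    NEGATIVE = {"confusing", "bad", "missing", "wrong", "buggy", "unclear"}

    def tokenize(text):
        """Lowercase word tokens; a stand-in for a real NLP tokenizer."""
        return re.findall(r"[a-z']+", text.lower())

    def token_count(review):
        """Number of tokens in the review text."""
        return len(tokenize(review))

    def tone_score(review):
        """Naive tone: (positive - negative) / total tokens, in [-1, 1]."""
        tokens = tokenize(review)
        if not tokens:
            return 0.0
        pos = sum(t in POSITIVE for t in tokens)
        neg = sum(t in NEGATIVE for t in tokens)
        return (pos - neg) / len(tokens)

    def relevance(review, submission):
        """Crude relevance proxy: vocabulary overlap (Jaccard) between the
        review and the submission it critiques. A placeholder for the
        graph-based and paraphrase-based measures the abstract describes."""
        r, s = set(tokenize(review)), set(tokenize(submission))
        return len(r & s) / len(r | s) if (r | s) else 0.0

    review = "The loop logic is unclear and the error handling is missing."
    code = "for item in items: handle(item)  # error handling omitted"
    print(token_count(review), tone_score(review), relevance(review, code))

Features such as these would then feed a classifier that predicts meta-review scores. The 20% baseline quoted above is consistent with guessing uniformly among five score classes, though the abstract does not state the exact experimental setup.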

Ramachandran, L., & Gehringer, E. F. (2012, June). An Automated Approach to Assessing the Quality of Code Reviews. Paper presented at the 2012 ASEE Annual Conference & Exposition, San Antonio, Texas. 10.18260/1-2--20914

ASEE holds the copyright on this document. It may be read by the public free of charge. Authors may archive their work on personal websites or in institutional repositories with the following citation: © 2012 American Society for Engineering Education. Other scholars may excerpt or quote from these materials with the same citation. When excerpting or quoting from Conference Proceedings, authors should, in addition to noting the ASEE copyright, list all the original authors and their institutions and name the host city of the conference.