San Antonio, Texas
June 10, 2012
June 13, 2012
Software Engineering Constituent Committee
25.154.1 - 25.154.11
An Automated Approach to Assessing the Quality of Code Reviews

Peer review of code and other software documents is an integral component of the software development life cycle. In software engineering courses, students can peer-review the work of their classmates. To help students improve their reviewing skills, feedback needs to be provided on the reviews they write.

The process of reviewing a review, i.e., evaluating review quality, is referred to as meta-reviewing. Meta-reviewing is at present carried out manually, and like any manual process it is (a) slow, (b) error-prone, and (c) inconsistent. We address the problem of automating meta-reviewing. An automated review process provides consistent (bias-free) evaluations to all reviewers. It can also give reviewers immediate feedback, which is likely to motivate them to improve their work and provide more useful feedback to the authors.

Our metrics for evaluating review quality on textual assignments include the content and tone of the review, the number of tokens in the review text, and the review's relevance to the submission. For reviews of textual submissions, the focus is likely to be more on the syntax and semantics of the text. In a preliminary analysis, we computed textual metrics such as content, tone, and token count for reviews and evaluated their usefulness in predicting meta-review scores. We observed accuracy values greater than 50%, better than the baseline accuracy of 20%.

Our approach has also produced promising results for the identification of relevance across textual reviews.
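To illustrate the kind of surface-level textual metrics mentioned above (token count and tone), the following is a minimal sketch. The word lists and function names here are hypothetical stand-ins for illustration only, not the actual features or lexicons used in the study.

```python
# Hypothetical sentiment lexicons, standing in for whatever tone
# resource an automated meta-reviewer might use.
POSITIVE = {"good", "clear", "helpful", "thorough"}
NEGATIVE = {"missing", "unclear", "wrong", "vague"}

def review_metrics(review_text: str) -> dict:
    """Compute simple textual metrics for a single review:
    token count and a crude tone score in [-1, 1]."""
    tokens = review_text.lower().split()
    if not tokens:
        return {"num_tokens": 0, "tone": 0.0}
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    return {"num_tokens": len(tokens), "tone": (pos - neg) / len(tokens)}

metrics = review_metrics("The design is clear but tests are missing")
# metrics["num_tokens"] is 8; one positive and one negative word cancel out.
```

Features like these would then serve as inputs to a model predicting the meta-review score.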
We incorporate syntactic and semantic features in the relevance-identification process, and in a preliminary study we found that the graph structures used to capture syntactic relationships and the paraphrasing metrics used to capture semantics were helpful in determining relevance.

In this presentation, we focus especially on reviews written for code in software engineering and related courses, such as object-oriented design. Our aim is to identify a suitable model for representing code reviews. For instance, factors such as the identification of certain types of errors or bugs, or the mention of program keywords or error statements, might be important in determining the quality of code reviews. We are gathering data this fall on reviews of application code and plan to model and evaluate reviews written for code. We are collecting these reviews using Expertiza, a web-based collaborative learning environment. In this paper, we report on how the review process helped students, and apply our automated process to the meta-reviewing of reviews of application code.
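The abstract describes relevance identification via graph-based syntactic structures and paraphrase metrics; as a much simpler stand-in for that idea, the sketch below scores review-to-submission relevance by lexical (Jaccard) overlap of token sets. The function name and the example texts are hypothetical.

```python
def relevance(review: str, submission: str) -> float:
    """Jaccard overlap between the token sets of a review and a
    submission: a crude relevance score in [0, 1]."""
    r = set(review.lower().split())
    s = set(submission.lower().split())
    if not r or not s:
        return 0.0
    return len(r & s) / len(r | s)

score = relevance("rename this variable", "please rename the loop variable")
# Two shared tokens out of six distinct tokens: score = 2/6.
```

A graph- or paraphrase-based metric would replace the bare set intersection with structural and semantic matching, but the output has the same shape: a relevance score per review-submission pair.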
ASEE holds the copyright on this document. It may be read by the public free of charge. Authors may archive their work on personal websites or in institutional repositories with the following citation: © 2012 American Society for Engineering Education. Other scholars may excerpt or quote from these materials with the same citation. When excerpting or quoting from Conference Proceedings, authors should, in addition to noting the ASEE copyright, list all the original authors and their institutions and name the host city of the conference. - Last updated April 1, 2015