
Using Natural Language Processing Tools to Classify Student Responses to Open-Ended Engineering Problems in Large Classes


2014 ASEE Annual Conference & Exposition


Indianapolis, Indiana

Publication Date: June 15, 2014

Start Date: June 15, 2014

End Date: June 18, 2014



Conference Session: Data Analytics in Education

Tagged Division: Computers in Education

Page Numbers: 24.1338.1 - 24.1338.15




Paper Authors


Matthew A. Verleger, Embry-Riddle Aeronautical University, Daytona Beach


Matthew Verleger is Assistant Professor in Freshman Engineering at Embry-Riddle Aeronautical University. He has a BS in Computer Engineering, an MS in Agricultural & Biological Engineering, and a PhD in Engineering Education, all from Purdue University. Prior to joining the Embry-Riddle faculty, he spent two years as an Assistant Professor of Engineering Education at Utah State University. His research interests include Model-Eliciting Activities, online learning, and the development of software tools to facilitate student learning.



Peer review can be a beneficial pedagogical tool for providing students with both feedback and varied perspectives. Despite its value, the most common mechanism for assigning reviewers to reviewees is still blind random assignment. This research represents the first step in a larger effort to find an improved method for matching reviewers to reviewees. By automating the classification of student work, reviewer quality and reviewee need can be assessed. With that assessment, the best reviewers can be assigned to the neediest teams, while the most self-sufficient teams can be assigned reviewers who may need to see higher-quality work.

The purpose of this paper is to present the preliminary findings from an effort to classify student team performance on Model-Eliciting Activities (MEAs) using natural language processing tools. MEAs are realistic, open-ended, client-driven engineering problems in which teams of students produce a written document describing the steps for solving the problem. Archival data containing expert evaluations of MEA solutions were used to test different natural language processing tools in an attempt to identify which could most accurately assign scores similar to an expert's. The research did not re-implement the selected algorithms, but rather used off-the-shelf libraries to explore the value of their application in this context.

Using a split-sample training-testing set, the "Bagged Decision Tree" and "Random Forest" algorithms were used to classify sample solutions against 11 MEA rubric dimensions. Accuracy on each rubric item averaged between 60% and 85%, depending on the item. The implementation of these algorithms also revealed words and phrases commonly used in higher-quality samples.

This paper focuses on how the data were obtained and prepared, how the different algorithms were utilized, how the algorithms performed in the classification tests, what the results indicate about our implementation of MEAs, and how the results will inform the next stages of the research project.
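The paper does not publish its code, nor does the abstract name the library it used, so the following is only an illustrative sketch of the workflow the abstract describes: vectorize free-text responses, split the sample into training and testing sets, and classify against a single rubric dimension with off-the-shelf "Bagged Decision Tree" and "Random Forest" implementations (here, scikit-learn). The responses and labels below are toy stand-ins, not data from the study.

```python
# Hedged sketch only: scikit-learn is an assumption, and the responses
# and labels are invented toy examples standing in for the archival MEA
# solutions and expert rubric scores used in the actual study.
from sklearn.ensemble import BaggingClassifier, RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Toy stand-ins for student-team responses and one binary rubric
# dimension (1 = meets the dimension, 0 = does not). The real study
# scored solutions against 11 rubric dimensions.
responses = [
    "our procedure ranks each candidate site using weighted criteria",
    "we just averaged the numbers and picked the biggest one",
    "the client can reapply our step-by-step method to new data sets",
    "we guessed based on the map",
] * 10
labels = [1, 0, 1, 0] * 10

# Split-sample training/testing set, as described in the abstract.
X_train, X_test, y_train, y_test = train_test_split(
    responses, labels, test_size=0.5, random_state=0)

scores = {}
for name, clf in [
    # BaggingClassifier defaults to decision-tree base estimators,
    # i.e. a bagged decision tree ensemble.
    ("Bagged Decision Tree", BaggingClassifier(random_state=0)),
    ("Random Forest", RandomForestClassifier(random_state=0)),
]:
    model = make_pipeline(TfidfVectorizer(), clf)
    model.fit(X_train, y_train)
    # Accuracy against the held-out expert labels.
    scores[name] = model.score(X_test, y_test)

print(scores)
```

In a setup like this, the fitted forest's `feature_importances_`, mapped back through the vectorizer's vocabulary, is one plausible way such models could surface the words and phrases common in higher-quality samples that the abstract mentions.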

Verleger, M. A. (2014, June), Using Natural Language Processing Tools to Classify Student Responses to Open-Ended Engineering Problems in Large Classes Paper presented at 2014 ASEE Annual Conference & Exposition, Indianapolis, Indiana. 10.18260/1-2--23271

ASEE holds the copyright on this document. It may be read by the public free of charge. Authors may archive their work on personal websites or in institutional repositories with the following citation: © 2014 American Society for Engineering Education. Other scholars may excerpt or quote from these materials with the same citation. When excerpting or quoting from Conference Proceedings, authors should, in addition to noting the ASEE copyright, list all the original authors and their institutions and name the host city of the conference. - Last updated April 1, 2015