
Board 62: Work in progress: A Comparative Analysis of Large Language Models and NLP Algorithms to Enhance Student Reflection Summaries


Conference: 2024 ASEE Annual Conference & Exposition

Location: Portland, Oregon

Publication Date: June 23, 2024

Start Date: June 23, 2024

End Date: July 12, 2024

Conference Session: Computers in Education Division (COED) Poster Session

Tagged Division: Computers in Education Division (COED)

Tagged Topic: Diversity

Permanent URL: https://strategy.asee.org/47060


Paper Authors


Ahmed Ashraf Butt, Carnegie Mellon University (ORCID: orcid.org/0000-0003-2047-8493)


Ahmed Ashraf Butt recently completed his Ph.D. in the School of Engineering Education at Purdue University, where he cultivated a multidisciplinary research portfolio bridging learning science, Human-Computer Interaction (HCI), and engineering education. His primary research focuses on designing and developing educational technologies that support different aspects of student learning (e.g., engagement). He is also interested in designing instructional interventions and exploring their relationship with different aspects of first-year engineering (FYE) students' learning (e.g., motivation and learning strategies). Before joining Purdue University, he worked as a lecturer at the University of Lahore, Pakistan. He has also worked in the software industry in various capacities, from developer to consultant.


Eesha Tur Razia Babar, University of California, Irvine


Eesha Tur Razia Babar holds a master's degree in Electrical and Computer Engineering from the University of California, Irvine. She completed her undergraduate studies in Electrical Engineering at the University of Engineering and Technology in Lahore, Pakistan. Her primary research interests include educational technology, educational data mining, and educational data science.


Muhsin Menekse, Purdue University, West Lafayette


Muhsin Menekse is an Associate Professor at Purdue University with a joint appointment in the School of Engineering Education and the Department of Curriculum & Instruction. Dr. Menekse's primary research focuses on exploring K-16 students' engagement with and learning of engineering and science concepts by creating innovative instructional resources and conducting interdisciplinary quasi-experimental research studies in and out of classroom environments. Dr. Menekse is the recipient of the 2014 William Elgin Wickenden Award from the American Society for Engineering Education. He was also selected as an NSF SIARM fellow for advanced research methods in STEM education research. Dr. Menekse received four Seed-for-Success Awards (in 2017, 2018, 2019, and 2021) from Purdue University's Excellence in Research Awards programs in recognition of securing external grants of $1 million or more in each of those years. His research has been funded by grants from the Institute of Education Sciences (IES), the U.S. Department of Defense (DoD), the Purdue Research Foundation (PRF), and the National Science Foundation (NSF).


Ali Alhaddad, Purdue University, West Lafayette


Abstract

The advent of state-of-the-art large language models has enabled remarkable progress in condensing large amounts of information into concise, coherent summaries, benefiting fields such as education, health, and public policy. This study contributes to that effort by investigating the effectiveness of two approaches for summarizing students' reflection texts: Natural Language Processing (NLP) algorithms customized for summarizing student reflections, and ChatGPT, a state-of-the-art large language model. To conduct the study, we used the CourseMIRROR application to collect students' reflections from two sections of an engineering course at a large Midwestern university. Over the semester, students were asked to reflect after each lecture on two aspects of their learning experience: what they found 1) interesting and 2) confusing in the lecture. In total, we collected reflections from 42 lectures, with an average class size of 80 students per section. For each lecture, we generated a summary of all submitted reflections using both NLP approaches as well as human annotators. We then evaluated summary quality by computing the ROUGE-N measure for each lecture's summaries produced by all three approaches and aggregated the results for each approach by averaging the ROUGE-N scores. Subsequently, we used ANOVA to test for significant differences between the average ROUGE scores of the two NLP approaches and the human-generated reflection summaries. Preliminary findings suggest that the customized NLP algorithms outperformed ChatGPT in creating reflection summaries. This implies that, despite being trained on a large corpus of textual data, a prominent large language model such as ChatGPT still requires improvement to match or surpass the performance of NLP algorithms tailored to a specific problem.
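The evaluation pipeline described in the abstract (ROUGE-N scoring of each lecture's summaries, averaging per approach, and an ANOVA across approaches) can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes, as is typical for ROUGE, that human-annotated summaries serve as the references; the summary strings and per-lecture scores are hypothetical placeholders, the ROUGE-N computation is a simplified recall-oriented variant, and scipy's f_oneway stands in for the ANOVA step.

```python
# Illustrative sketch only (not the authors' code). Placeholder texts and
# scores are made up for demonstration; the study aggregates scores over
# 42 lectures per approach.
from collections import Counter

from scipy.stats import f_oneway  # one-way ANOVA


def ngrams(text: str, n: int) -> Counter:
    """Lowercased, whitespace-tokenized n-gram counts of a summary."""
    tokens = text.lower().split()
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))


def rouge_n_recall(candidate: str, reference: str, n: int = 2) -> float:
    """ROUGE-N recall: overlapping n-grams divided by n-grams in the reference."""
    cand, ref = ngrams(candidate, n), ngrams(reference, n)
    if not ref:
        return 0.0
    overlap = sum(min(count, cand[gram]) for gram, count in ref.items())
    return overlap / sum(ref.values())


# Hypothetical single-lecture example: score an automated summary against the
# human-annotated reference summary for that lecture.
human_reference = "students found the free body diagram examples interesting but friction confusing"
machine_summary = "many students found friction confusing and the free body diagram examples interesting"
print(rouge_n_recall(machine_summary, human_reference, n=2))

# Hypothetical per-lecture ROUGE-2 scores (illustrative numbers, not study
# results): one list per approach, then a one-way ANOVA tests whether mean
# ROUGE scores differ significantly between approaches.
custom_nlp_scores = [0.42, 0.39, 0.47, 0.41, 0.44]
chatgpt_scores = [0.35, 0.31, 0.38, 0.33, 0.36]
f_stat, p_value = f_oneway(custom_nlp_scores, chatgpt_scores)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```

Under this setup, the approach with the higher mean ROUGE-N score is the one whose summaries align more closely with the human-written references, which is the comparison the preliminary findings report.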

Butt, A. A., & Babar, E. T. R., & Menekse, M., & Alhaddad, A. (2024, June), Board 62: Work in progress: A Comparative Analysis of Large Language Models and NLP Algorithms to Enhance Student Reflection Summaries Paper presented at 2024 ASEE Annual Conference & Exposition, Portland, Oregon. https://strategy.asee.org/47060

ASEE holds the copyright on this document. It may be read by the public free of charge. Authors may archive their work on personal websites or in institutional repositories with the following citation: © 2024 American Society for Engineering Education. Other scholars may excerpt or quote from these materials with the same citation. When excerpting or quoting from Conference Proceedings, authors should, in addition to noting the ASEE copyright, list all the original authors and their institutions and name the host city of the conference. - Last updated April 1, 2015