
Board 50: Work in Progress: A Systematic Review of Embedding Large Language Models in Engineering and Computing Education


Conference: 2024 ASEE Annual Conference & Exposition

Location: Portland, Oregon

Publication Date: June 23, 2024

Start Date: June 23, 2024

End Date: July 12, 2024

Conference Session: Computers in Education Division (COED) Poster Session

Tagged Division: Computers in Education Division (COED)

Permanent URL: https://strategy.asee.org/47047


Paper Authors


David Reeping, University of Cincinnati (orcid.org/0000-0002-0803-7532)


Dr. David Reeping is an Assistant Professor in the Department of Engineering and Computing Education at the University of Cincinnati. He earned his Ph.D. in Engineering Education from Virginia Tech and was a National Science Foundation Graduate Research Fellow. He received his B.S. in Engineering Education with a Mathematics minor from Ohio Northern University. His main research interests include transfer student information asymmetries, threshold concepts, curricular complexity, and advancing quantitative and fully integrated mixed methods.


Aarohi Shah, University of Cincinnati


Abstract

This work-in-progress paper explores how students and faculty are employing large language models (LLMs) like ChatGPT in engineering and computing education contexts through a systematic literature review (SLR). As seen in the myriad opinion pieces and articles in the popular press, and much to the concern of instructors, students are leveraging AI models such as ChatGPT to complete their assignments, bringing discussions of academic dishonesty to the forefront. However, the use of LLMs like ChatGPT is not entirely fraught with threats to education; work has also emerged on faculty experimenting with incorporating these models into their teaching and evaluation methods. Moreover, categorizing all student use of LLMs as a violation of academic integrity is unproductive, and related work has begun to explore student perceptions and use cases. Despite the proliferation of manuscripts offering methods for incorporating LLMs into teaching, much of the advice either does not elaborate on practical use cases across disciplines or does not offer data to support the efficacy of the use case. Thus, we delve into the implementation of different approaches to using LLMs like ChatGPT in engineering and computing education, examining how these tools are being leveraged for pedagogical and assessment purposes.

The research question guiding this work is: “How are students and faculty using LLMs (e.g., ChatGPT) in engineering and computing education contexts for instruction and assessment?” To provide a comprehensive understanding of the current landscape, an SLR was conducted, specifically culling papers from arXiv under the assumption that much of the work on using LLMs in education was still under peer review. We first selected a set of “sentinel articles,” articles chosen beforehand that we already intended to extract for analysis, which helped us develop a set of keywords and form the search string. The search string combined general terms such as “large language model” with names of specific models such as “GPT-3.5,” and these were paired with keywords like “education” to capture a breadth of papers. The initial search returned 717 papers. Our main inclusion criterion was that the paper must be situated in an engineering or computing education context. After screening at the abstract level, 51 papers met the inclusion criterion, and 49 required further checks for alignment, typically papers situated in education broadly. The remaining 617 papers were determined to be out of scope for the purposes of this study. We are currently reviewing these 100 papers further to refine the final sample by applying quality checks and retaining only studies based on actual implementations of the technology.
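To make the search strategy concrete, the sketch below shows one way such a query could be issued against the public arXiv export API using only the Python standard library. The query string, result limit, and field choices are illustrative assumptions, not the exact search string used in this study.

```python
# Minimal sketch (not the authors' actual pipeline) of querying the arXiv
# export API with a search string that combines general LLM terms, specific
# model names, and education keywords. The keyword set here is hypothetical.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

# Hypothetical search string in arXiv API query syntax.
query = '(all:"large language model" OR all:"GPT-3.5" OR all:ChatGPT) AND all:education'

params = urllib.parse.urlencode({
    "search_query": query,
    "start": 0,
    "max_results": 25,
})
url = f"https://export.arxiv.org/api/query?{params}"

with urllib.request.urlopen(url) as response:
    feed = ET.fromstring(response.read())

# Print title and abstract snippet for a first-pass screen at the abstract level.
for entry in feed.findall(f"{ATOM}entry"):
    title = entry.findtext(f"{ATOM}title", "").strip()
    abstract = entry.findtext(f"{ATOM}summary", "").strip()
    print(title)
    print(abstract[:200], "...\n")
```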

Our initial results suggest that the papers are converging around a set of common use cases for ChatGPT and similar models, such as leveraging ChatGPT for authoring learning outcomes. Papers also detail the development and utilization of distinct fine-tuned engines designed for personalized interaction with students, serving as customized tutors by generating practice problems in areas like computing, physics, and mathematics. Diverse applications in assessment are also present, such as generating multiple-choice questions and feedback using LLMs. We plan to include detailed, concrete examples of ChatGPT's practical applications.
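As one concrete illustration of the assessment-oriented use cases appearing in the sample, the sketch below prompts a chat-based LLM to draft multiple-choice questions with per-option feedback. It uses the OpenAI Python client as an example interface; the prompt, model name, and topic are hypothetical and are not drawn from any specific paper in the review.

```python
# Hypothetical illustration of generating multiple-choice questions and
# feedback with an LLM. Assumes the `openai` client library is installed and
# an OPENAI_API_KEY environment variable is set.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Write two multiple-choice questions on Python list comprehensions for an "
    "introductory computing course. For each question, give four options, mark "
    "the correct answer, and write one sentence of feedback per option."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # any chat-capable model would serve for this sketch
    messages=[{"role": "user", "content": prompt}],
    temperature=0.7,
)

# The generated questions would still require instructor review before use.
print(response.choices[0].message.content)
```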

We anticipate that this research will be primarily useful for faculty in engineering and computing education and for educational researchers exploring the potential uses and impacts of LLMs. By extracting actionable first steps for instructors from our sample of manuscripts, we aim to contribute more comprehensive strategies to the literature, synthesizing the components needed to translate this emerging technology into the classroom.

Reeping, D., & Shah, A. (2024, June), Board 50: Work in Progress: A Systematic Review of Embedding Large Language Models in Engineering and Computing Education Paper presented at 2024 ASEE Annual Conference & Exposition, Portland, Oregon. https://strategy.asee.org/47047

ASEE holds the copyright on this document. It may be read by the public free of charge. Authors may archive their work on personal websites or in institutional repositories with the following citation: © 2024 American Society for Engineering Education. Other scholars may excerpt or quote from these materials with the same citation. When excerpting or quoting from Conference Proceedings, authors should, in addition to noting the ASEE copyright, list all the original authors and their institutions and name the host city of the conference.