Portland, Oregon
June 23–26, 2024
Data Science & Analytics Constituent Committee (DSA)
DOI: 10.18260/1-2--48541
https://peer.asee.org/48541
Paula Larrondo is a Ph.D. candidate at Queen's University. She is developing a neural network model to provide automated feedback on open-ended student work in the context of complex problem-solving in engineering education (co-supervised by Dr. Brian Frank of the Department of Electrical and Computer Engineering and Dr. Julian Ortiz of the Robert M. Buchan Department of Mining). She is a geologist with an M.Sc. from the Universidad de Chile (Chile) and an M.Sc. in Mining Engineering (Geostatistics) from the University of Alberta (Canada).
Brian Frank is the DuPont Canada Chair in Engineering Education Research and Development and the Director of Program Development in the Faculty of Engineering and Applied Science at Queen's University, where he works on engineering curriculum development.
Dr. Ortiz is a Mining Engineer from the Universidad de Chile and holds a Ph.D. from the University of Alberta. He is currently Professor and Mark Cutifani / Anglo American Chair in Mining Innovation at the University of Exeter's Camborne School of Mines in the United Kingdom, where he conducts research on geostatistical ore body estimation and simulation, and on geometallurgical modeling using statistical learning. His previous roles include Head of Department at Queen's University and at the Universidad de Chile.
This paper presents work in progress (WIP) toward using artificial intelligence (AI), specifically large language models (LLMs), to support rapid, high-quality feedback mechanisms in engineering education settings. It describes applying LLMs to improve feedback processes by providing information directly to students, graders, or instructors of courses focused on complex engineering problem-solving. We detail how fine-tuning an LLM on a small dataset drawn from diverse problem scenarios achieves classification accuracies of approximately 80%, even on new problems not included in the fine-tuning process. Traditionally, open-source LLMs such as BERT have been fine-tuned on large datasets for specific domain tasks; our results suggest that large datasets may not be as critical to good performance as previously thought. Our findings demonstrate the potential of AI-supported personalized feedback delivered through high-level prompts that encourage students to critically self-assess their problem-solving process and communication. However, this study also highlights the need for further research into how semantic diversity and synthetic data augmentation can optimize training datasets and affect model performance.
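The abstract does not include the authors' training code. As a purely illustrative sketch of the general setup it describes (fine-tuning a pretrained encoder with a classification head on a small labeled dataset), the toy PyTorch example below trains a classifier over token-id sequences. The `ToyEncoder`, the three feedback categories, the dimensions, and the synthetic data are all hypothetical stand-ins; in practice the encoder would be a pretrained model such as BERT loaded from a library like Hugging Face Transformers.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical setup: 3 feedback categories a grader might assign to a
# student's problem-solving response. These labels are illustrative only.
NUM_CLASSES, HIDDEN, VOCAB = 3, 32, 100

class ToyEncoder(nn.Module):
    """Stand-in for a pretrained encoder (e.g. BERT). In a real pipeline
    its weights would be loaded pretrained and fine-tuned, not trained
    from scratch as here."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, HIDDEN)

    def forward(self, ids):                 # ids: (batch, seq_len)
        return self.embed(ids).mean(dim=1)  # mean-pooled sequence vector

class FeedbackClassifier(nn.Module):
    """Encoder plus a linear classification head, the usual shape of a
    fine-tuned sequence classifier."""
    def __init__(self):
        super().__init__()
        self.encoder = ToyEncoder()
        self.head = nn.Linear(HIDDEN, NUM_CLASSES)

    def forward(self, ids):
        return self.head(self.encoder(ids))  # logits: (batch, NUM_CLASSES)

# Tiny synthetic "dataset": 24 token-id sequences with random class labels,
# standing in for graded student responses.
X = torch.randint(0, VOCAB, (24, 10))
y = torch.randint(0, NUM_CLASSES, (24,))

model = FeedbackClassifier()
opt = torch.optim.AdamW(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

losses = []
for _ in range(50):  # a few "fine-tuning" steps on the toy data
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
    losses.append(loss.item())
```

On real data, the encoder weights would come pretrained and only a small labeled set (as the paper reports) would be used for this supervised step; the loop structure, loss, and head are otherwise the same.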
Larrondo, P. F., Frank, B. M., & Ortiz, J. (2024, June). Work-in-Progress: Fine-Tuning Large Language Models for Automated Feedback in Complex Engineering Problem-Solving. Paper presented at the 2024 ASEE Annual Conference & Exposition, Portland, Oregon. 10.18260/1-2--48541
ASEE holds the copyright on this document. It may be read by the public free of charge. Authors may archive their work on personal websites or in institutional repositories with the following citation: © 2024 American Society for Engineering Education. Other scholars may excerpt or quote from these materials with the same citation. When excerpting or quoting from Conference Proceedings, authors should, in addition to noting the ASEE copyright, list all the original authors and their institutions and name the host city of the conference.