Conference Location: Portland, Oregon
Publication Date: June 23, 2024
Conference Start Date: June 23, 2024
Conference End Date: June 26, 2024
Conference Session: Diversity and NSF Grantees Poster Session
Page Count: 9
DOI: 10.18260/1-2--46835
Permanent URL: https://peer.asee.org/46835
Download Count: 293
Xishuang Dong is an Assistant Professor in the Department of Electrical and Computer Engineering, Roy G. Perry College of Engineering, Prairie View A&M University. His research interests include foundation AI, deep learning, object detection, natural language processing, computer systems biology, and the Internet of Things.
Dr. Yujian Fu is an Associate Professor in the Computer Science Department at Alabama A&M University. Her research interests include formal verification of cyber-physical systems, behavioral analysis of mobile security, and software architecture and design analysis.
Ming-Mu Kuo is a doctoral student in the Department of Electrical and Computer Engineering and a Senior Data Analyst in the Office of Institutional Research & Effectiveness at Prairie View A&M University. His research interests include online education, computer-assisted language learning, knowledge tracing, and deep learning.
Dr. Xiangfang Li is an associate professor in the Department of Electrical and Computer Engineering at Prairie View A&M University. Her research interests encompass computational biology, computer networking and communications, and machine learning and AI.
Unlike many teaching pedagogies, such as evidence-based learning, personalized adaptive learning (PAL) takes a distinct approach by monitoring the progress of each individual student and tailoring the learning path to that student's specific knowledge and needs. Rather than providing a one-size-fits-all experience, PAL customizes learning for each student. One essential technique for implementing PAL effectively is knowledge tracing, which models students' knowledge over time and enables predictions about their performance in future interactions. Based on these predictions, resources and learning paths can be recommended to students according to their individual requirements, and content that is anticipated to be too easy or too difficult can be skipped or delayed. In recent years, deep learning technologies have been successfully applied to enhance knowledge tracing, an approach known as Deep Knowledge Tracing (DKT). This paper introduces a novel approach based on Large Language Models (LLMs) to further improve DKT. LLMs are deep learning models trained on extensive datasets using self-supervised and semi-supervised learning techniques. Prominent examples include BERT, GPT, GPT-4, LLaMA, and Claude, all of which have demonstrated remarkable performance across a wide spectrum of natural language processing (NLP) tasks. The goal of this paper is to alleviate the data sparsity issues associated with one-hot encoding of student learning records by representing these records with LLMs. The representation process involves designing various prompts that encourage LLMs to establish correlations between different elements within the learning records. To validate the proposed method, extensive experiments will be conducted on multiple datasets, including ASSISTments (2015 and 2017), KDD Cup 2010, and NIPS 2020.
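As an illustrative sketch only, and not the paper's implementation, the snippet below contrasts the sparse one-hot interaction encoding used by classic DKT with a dense, prompt-based representation of the same learning records. The model class, the prompt wording, the question bank size, and the `encode_with_llm` placeholder (standing in for a real LLM encoder such as BERT) are assumptions introduced here for illustration.

```python
# Illustrative sketch: sparse one-hot DKT inputs vs. dense prompt-based
# representations of student learning records. All names are hypothetical.
import torch
import torch.nn as nn

num_questions = 100  # assumed size of the question bank


def one_hot_interaction(question_id: int, correct: int) -> torch.Tensor:
    """Classic DKT input: a 2*Q-dimensional vector that is almost entirely zeros."""
    x = torch.zeros(2 * num_questions)
    x[question_id + correct * num_questions] = 1.0
    return x


def build_prompt(question_text: str, skill: str, correct: int) -> str:
    """Turn one learning record into a natural-language prompt for an LLM encoder."""
    outcome = "correctly" if correct else "incorrectly"
    return f"The student answered a question on '{skill}' {outcome}. Question: {question_text}"


def encode_with_llm(prompt: str, dim: int = 768) -> torch.Tensor:
    """Placeholder for an LLM encoder (e.g., a BERT pooled embedding).
    A real implementation would tokenize the prompt and return the model's
    hidden state; here a random vector of the same shape stands in for it."""
    return torch.randn(dim)


class DKTModel(nn.Module):
    """Minimal LSTM-based knowledge-tracing model over a sequence of encodings."""
    def __init__(self, input_dim: int, hidden_dim: int = 128):
        super().__init__()
        self.lstm = nn.LSTM(input_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, num_questions)  # P(correct) per question

    def forward(self, x):  # x: (batch, seq_len, input_dim)
        h, _ = self.lstm(x)
        return torch.sigmoid(self.out(h))


# One student's three interactions, represented both ways.
records = [(3, "Solve 2x + 4 = 10.", "linear equations", 1),
           (7, "Factor x^2 - 9.", "factoring", 0),
           (3, "Solve 5x - 5 = 20.", "linear equations", 1)]

sparse_seq = torch.stack([one_hot_interaction(q, c) for q, _, _, c in records]).unsqueeze(0)
dense_seq = torch.stack([encode_with_llm(build_prompt(t, s, c))
                         for _, t, s, c in records]).unsqueeze(0)

print(DKTModel(input_dim=2 * num_questions)(sparse_seq).shape)  # torch.Size([1, 3, 100])
print(DKTModel(input_dim=768)(dense_seq).shape)                 # torch.Size([1, 3, 100])
```

The contrast is the point of the sketch: the one-hot sequence grows with the question bank and carries a single nonzero entry per step, whereas the prompt-based encoding is a fixed-size dense vector that can also carry textual information about the question and skill.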
Dong, X., & Fu, Y., & Kuo, M., & Sarker, S., & Qian, L., & Li, X. (2024, June), Board 262: Enhancing Deep Knowledge Tracing via Diffusion Models for Personalized Adaptive Learning Paper presented at 2024 ASEE Annual Conference & Exposition, Portland, Oregon. 10.18260/1-2--46835
ASEE holds the copyright on this document. It may be read by the public free of charge. Authors may archive their work on personal websites or in institutional repositories with the following citation: © 2024 American Society for Engineering Education. Other scholars may excerpt or quote from these materials with the same citation. When excerpting or quoting from Conference Proceedings, authors should, in addition to noting the ASEE copyright, list all the original authors and their institutions and name the host city of the conference. - Last updated April 1, 2015