Montreal, Quebec, Canada
June 22, 2025
August 15, 2025
Computers in Education Division (COED) Poster Session (Track 1.A)
Computers in Education Division (COED)
10.18260/1-2--55920
https://peer.asee.org/55920
Dr. Ahmed Ashraf Butt is an Assistant Professor at the University of Oklahoma. He recently completed his Ph.D. in the School of Engineering Education at Purdue University and pursued post-doctoral training at the School of Computer Science, Carnegie Mellon University (CMU). He has cultivated a multidisciplinary research portfolio bridging learning sciences, Human-Computer Interaction (HCI), and engineering education. His primary research focuses on designing and developing educational technologies that facilitate various aspects of student learning, such as engagement. Additionally, he is interested in designing instructional interventions and exploring their relationship with first-year engineering (FYE) students’ learning aspects, including motivation and learning strategies. Prior to his time at Purdue, Dr. Butt worked as a lecturer at the University of Lahore, Pakistan, and has been associated with the software industry in various capacities.
Saira Anwar is an Assistant Professor in the Department of Multidisciplinary Engineering, Texas A&M University, College Station. She received her Ph.D. in Engineering Education from the School of Engineering Education, Purdue University, USA. Her research is funded by the Department of Energy, the National Science Foundation, and industry sponsors. Her research potential and the implications of her work have been recognized through national and international awards, including the 2023 NSTA/NARST Research Worth Reading award for her publication in the Journal of Research in Science Teaching, the 2023 New Faculty Fellow award from the IEEE/ASEE Frontiers in Education Conference, the 2022 Apprentice Faculty Grant award from the ERM Division of ASEE, and the 2020 outstanding researcher award from the School of Engineering Education, Purdue University. Dr. Anwar has over 20 years of teaching experience at various national and international universities, including Texas A&M University, USA; the University of Florida, USA; and Forman Christian College University, Pakistan. She received outstanding teacher awards in 2013 and 2006, as well as the "President of Pakistan Merit and Talent Scholarship" for her undergraduate studies.
Writing high-quality learning objectives is crucial to designing an effective curriculum. Learning objectives help the instructor align course components (e.g., content, assessment, and pedagogy) to provide students with a coherent learning experience. However, because most instructors have no formal training in education, they often lack the experience and expertise to write quality learning objectives. Poorly written learning objectives can lead to misalignment among course components, and students often complain about variation between what is taught in class and what is assessed on exams. In this Work-in-Progress paper, we argue that generative AI, the recent advancement in AI, has the potential to help improve the quality of learning objectives by providing real-time scaffolding and feedback. The SMART criteria (Specific, Measurable, Attainable, Relevant, and Time-bound), a widely recognized best practice for crafting clear and effective learning objectives, can serve as the rubric for evaluating them. To this end, we collected 100 learning objectives from publicly available STEM course curricula and evaluated each one against the SMART criteria using two approaches: (1) feedback generated by human experts, and (2) feedback generated by a generative AI model (GPT, a generative pre-trained transformer). Specifically, we addressed the following research question: How well does GPT feedback match that of human experts when evaluating course learning objectives using the SMART framework? We used Cohen's kappa to assess the level of agreement between GPT and human expert evaluations, and we qualitatively analyzed the learning objectives with strong disagreement between the two. Our findings show that GPT reached reasonable agreement with human experts on the "Relevant" criterion; however, its assessments of the other criteria were inconsistent with the human evaluations.
These inconsistencies may stem from the AI's limited contextual understanding, such as how an assessment is applied, access to the broader course structure, and learner needs. Overall, the results suggest that while GPT can assess certain aspects of learning objectives effectively, further refinement with more contextual information is needed. After improving the current AI approach, we plan to build a scaffolding tool that provides instructors with real-time feedback as they draft their learning objectives. This study contributes to the literature on using AI in education, helping teachers make informed decisions with minimal effort and facilitating students' learning.
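The abstract reports using Cohen's kappa to quantify agreement between GPT and human-expert evaluations. As a minimal sketch of how such an agreement score might be computed, the following assumes each rater assigns one categorical label (e.g., pass/fail on a SMART criterion) per learning objective; the function name and the example labels are illustrative, not taken from the paper:

```python
from collections import Counter


def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' categorical labels.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
    agreement and p_e is the agreement expected by chance from
    each rater's marginal label frequencies.
    """
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items where the raters match.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal distribution.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    labels = set(counts_a) | set(counts_b)
    p_e = sum((counts_a[lab] / n) * (counts_b[lab] / n) for lab in labels)
    if p_e == 1.0:  # both raters used a single identical label
        return 1.0
    return (p_o - p_e) / (1 - p_e)


# Hypothetical "Relevant" ratings for five objectives ("y" = meets criterion).
gpt = ["y", "y", "n", "y", "n"]
experts = ["y", "n", "n", "y", "n"]
print(round(cohens_kappa(gpt, experts), 3))  # moderate agreement, ~0.615
```

A kappa near 1 indicates strong agreement beyond chance, 0 indicates chance-level agreement, and negative values indicate systematic disagreement; in practice, a library routine such as scikit-learn's `cohen_kappa_score` would give the same result.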
Butt, A. A., Anwar, S., & Kardgar, A. (2025, June). BOARD #103: Work-in-Progress: Evaluating Course Learning Objectives with Generative AI Using SMART Criteria. Paper presented at the 2025 ASEE Annual Conference & Exposition, Montreal, Quebec, Canada. 10.18260/1-2--55920
ASEE holds the copyright on this document. It may be read by the public free of charge. Authors may archive their work on personal websites or in institutional repositories with the following citation: © 2025 American Society for Engineering Education. Other scholars may excerpt or quote from these materials with the same citation. When excerpting or quoting from Conference Proceedings, authors should, in addition to noting the ASEE copyright, list all the original authors and their institutions and name the host city of the conference. - Last updated April 1, 2015