participate but also to explain the importance of AI in science to their peers and community. This enabled scholars to feel a personal connection as their scientific project was envisioned within a real-world context.

Figure 2. Google Teachable Machine [16].

Measures and data sources
The self-reports of the children's self-efficacy for AI were collected via a survey administered on Qualtrics before and after the Shark AI program. Self-efficacy for AI was assessed using an adapted version of the original Science subscale (9 items) and the Technology and Engineering subscale (9 items) of the widely used 37-item S-STEM questionnaire developed by North Carolina State University's Friday Institute [19]. Only the Science and Technology
different challenge for repeat attempts. The goal of the pilot was to measure the impact on students' study habits, self-efficacy, and learning outcomes. Students completed a 25-item survey on knowledge of course content and self-efficacy at the start and end of the course. At the end of each chapter, students were offered the self-assessment quiz, followed by a brief survey on the insights they gained about their understanding of the material and the impact on their study habits and self-efficacy. This paper presents exploratory analyses examining students' self-assessment quiz usage patterns through the course, quantifying students' engagement with the self-assessment quizzes, and gathering insights into whether students found the self-assessment
the perceived challenges of live streaming as an informal learning opportunity for computer science students?

Through this work, we aim to understand and evaluate whether or not live streaming impacts an undergraduate student's perceived self-efficacy in software or game development (RQ1). To quantitatively measure self-efficacy, we have adapted questions from Ramalingam and Wiedenbeck's Computer Programming Self-Efficacy Scale and Hiranrat et al.'s survey measurements for software development careers [41, 42]. Because we allow the students to choose their own projects and set their own goals, we expect some division among the participants in how quickly they believe they have improved, depending on the ambition of the goals they set for
research exists on the use of case studies to motivate non-STEM majors to study technological topics, particularly in contexts where hands-on technology activities complement the case study by exploring its underlying themes and demonstrating the significance of the technology. In this course, the case studies serve an additional purpose: they provide real-world examples of the impact of either embracing or ignoring a new technology.

Self-efficacy refers to confidence in one's ability to accomplish specific tasks, and enhancing students' self-efficacy increases the likelihood of achieving desired outcomes [4, 5]. Research across various disciplines highlights the critical role of experiential learning in building self-efficacy. For example, educators
in Science, Technology, Engineering, and Math (STEM) professions has long been a problem, especially among minority and female students. According to studies, structural impediments such as a lack of mentorship, limited access to research opportunities, and budgetary restrictions disproportionately affect these populations [1], [2]. To address these disparities, the ARROWS program at North Carolina A&T State University has adopted a holistic strategy that focuses on mentorship, hands-on research, and a supportive academic atmosphere.

Mentorship, defined as experienced persons guiding mentees through academic and professional challenges, has been demonstrated to dramatically increase retention rates [3]. For example, it promotes self-efficacy
and quantitative measures. Qualitatively, we will assess student engagement and self-efficacy through Likert-scale surveys. Quantitatively, we will compare task completion times and scores to evaluate learning outcomes. By automating tree validation and grading, the tool not only enhances engagement but also improves teaching efficiency.

1 Introduction
Parse trees, or syntax trees, are essential in computer science education as they represent the hierarchical structure of programming language expressions. They are fundamental in understanding syntax analysis, compiler construction, and language processing algorithms. However, traditional teaching methods often involve manually constructing syntax trees through static diagrams or hand-drawn exercises. While
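As a minimal illustration of the structure such a tool teaches (a sketch, not the paper's implementation), a parse tree for the expression 3 + 4 * 2 can be built with a tiny recursive-descent parser and then evaluated by walking the tree:

```python
# Sketch: build and evaluate a parse tree for +/* arithmetic expressions.
from dataclasses import dataclass
from typing import Union

@dataclass
class Num:
    value: int

@dataclass
class BinOp:
    op: str
    left: "Tree"
    right: "Tree"

Tree = Union[Num, BinOp]

def parse(tokens):
    """Recursive-descent parse; '*' binds tighter than '+'."""
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def expr():            # expr -> term ('+' term)*
        nonlocal pos
        node = term()
        while peek() == "+":
            pos += 1
            node = BinOp("+", node, term())
        return node

    def term():            # term -> factor ('*' factor)*
        nonlocal pos
        node = factor()
        while peek() == "*":
            pos += 1
            node = BinOp("*", node, factor())
        return node

    def factor():          # factor -> number
        nonlocal pos
        tok = tokens[pos]
        pos += 1
        return Num(int(tok))

    return expr()

def evaluate(t):
    """Post-order walk of the tree: children first, then the operator."""
    if isinstance(t, Num):
        return t.value
    ops = {"+": lambda a, b: a + b, "*": lambda a, b: a * b}
    return ops[t.op](evaluate(t.left), evaluate(t.right))

tree = parse(["3", "+", "4", "*", "2"])
print(evaluate(tree))  # 11, because '*' binds tighter than '+'
```

The nesting of the resulting tree (the `*` node sits below the `+` node) is exactly the precedence structure that hand-drawn syntax-tree exercises ask students to produce.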
Christine Alvarado. 2021. The Relationship Between Sense of Belonging and Student Outcomes in CS1 and Beyond. In Proceedings of the 17th ACM Conference on International Computing Education Research (Virtual Event, USA) (ICER 2021). Association for Computing Machinery, New York, NY, USA, 29–41. https://doi.org/10.1145/3446871.3469748
[4] Alex Lishinski and Joshua Rosenberg. 2021. All the Pieces Matter: The Relationship of Momentary Self-efficacy and Affective Experiences with CS1 Achievement and Interest in Computing. In Proceedings of the 17th ACM Conference on International Computing Education Research (Virtual Event, USA) (ICER 2021). Association for Computing Machinery, New York, NY, USA, 252–265. https://doi.org/10.1145
educational settings," Journal of Applied Psychology, vol. 28, no. 3, pp. 211–224, 2022.
[13] B. Cook-Chennault and V. Villanueva, "Student anxiety in competitive educational games," Educational Psychology Review, vol. 42, no. 1, pp. 83–95, 2020.
[14] A. Cook-Chennault and V. Villanueva, "Inclusive game design in engineering education," Journal of Diversity in Higher Education, vol. 19, no. 2, pp. 105–118, 2020.
[15] R. M. Marra, K. A. Rodgers, D. Shen, and B. Bogue, "Women Engineering Students and Self-Efficacy: A Multi-Year, Multi-Institution Study of Women Engineering Student Self-Efficacy," J. of Engineering Edu., vol. 98, no. 1, pp. 27–38, Jan. 2009, doi: 10.1002/j.2168-9830.2009.tb01003.x.
[16] M. A. Hutchison, D. K. Follman, M. Sumpter, and G. M. Bodner
University of Florida (UF). Her research focuses on self-efficacy and critical mentorship in engineering and computing. She is passionate about broadening participation and leveraging evidence-based approaches to improve the engineering education environment for minoritized individuals.
Victor Perez
STEPHANIE KILLINGSWORTH, University of Florida
©American Society for Engineering Education, 2025

WIP: One Teacher's Experience Adapting an Innovative, Flexible Computer Vision Curriculum in a Middle School Science Classroom

Introduction
Artificial intelligence (AI) is predicted to be one of the most disruptive technologies in the 21st century [1], and to prepare all young people to live and work in an AI
teachers in rural areas. It measures teachers' perceptions of rural life, activities, and behaviors, as well as relationships with persons in the rural community. The RIS showed an acceptable internal reliability of α = 0.72–0.83, which supports its effectiveness in capturing rural identity. The teacher mindset survey, adapted from [47] and [48], was a vital instrument that supplied valuable insights into diverse aspects of teachers' mindsets. It measures parameters such as concerns about social comparison, self-efficacy, comfort being oneself, task value, and the perceived costs of participating in the training program. Each survey item was rated on a 5-point Likert scale, with 1 being "strongly disagree" to 5 being
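The internal-reliability figure quoted for the RIS (α = 0.72–0.83) is a Cronbach's alpha. As a rough illustration of how such a value is computed (a sketch, not the study's analysis; the Likert responses below are invented), alpha can be derived from a respondents-by-items score matrix using only the standard library:

```python
from statistics import pvariance

def cronbach_alpha(rows):
    """Cronbach's alpha for a respondents-x-items score matrix.

    alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores),
    using population variances throughout.
    """
    k = len(rows[0])                       # number of items
    items = list(zip(*rows))               # transpose: one tuple per item
    item_vars = sum(pvariance(col) for col in items)
    total_var = pvariance([sum(r) for r in rows])
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical 5-point Likert responses (4 respondents x 3 items):
responses = [
    [4, 5, 4],
    [3, 3, 4],
    [5, 5, 5],
    [2, 3, 2],
]
print(round(cronbach_alpha(responses), 2))  # 0.94
```

Values in the 0.7–0.9 range, like the RIS's, are conventionally read as acceptable-to-good internal consistency.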
critical feedback can play a key role in motivation to continue learning. One study [37] on critical constructive feedback (CCF) with student control showed that, for low-achieving students, the presence of a TA led them to choose CCF more often and to neglect CCF results less. This was in a game-based, AI agent-supported learning environment for history lessons that gave students ownership of the communication with the AI agent. Another game-based study discussed how the self-efficacy (SE) of a tutee agent affects students' performance [38]. However, the conversation was scripted, the questions were multiple-choice, and feedback was communicated through a chat window. In view of their results, the authors recommended not only designing more SE into
, 30, 34]. Hence, the majority of studies reporting benefits of LLMs focus more on student engagement, interaction patterns, and behaviors, or on student perceptions such as satisfaction, perceived benefit, self-efficacy, or motivation [33, 37, 25, 39, 40, 41, 26, 42, 7, 43, 27, 44, 45, 46]. We discuss the relevance of this work at further length in Section 5 but note here that our study differs significantly in context, as our tasks do not assess programming ability specifically, but broader knowledge and problem-solving skills related to computer engineering and embedded systems.

3 Methods
To test the potential impact of LLMs in SRL, we designed a 2-stage study consisting of a counterbalanced repeated-measures experiment and a
ID   Pre    Post   Diff   Dev    Dev²
            3.95   0.75   -0.27  0.08
15   3.3    4.12   0.83   -0.2   0.04
16   2.7    4.25   1.55   0.53   0.28
17   3.15   4.02   0.87   -0.15  0.02
                   M: 1.02       SS: 0.44
M: mean of the difference between the two surveys (Post-Pre)
SS: sum of squares of deviations

3.2.2 Descriptive Results of Survey Questions
This section provides detailed descriptive results of the survey questions.

Q3 - Self-Efficacy in Problem Solving: Rate your confidence in solving programming
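The difference and deviation columns follow directly from the pre/post scores. As a sketch (using only the three fully visible rows; the reported M = 1.02 and SS = 0.44 cover all respondents, so small rounding differences against the table may appear):

```python
# Recompute per-row statistics from the survey table (rows 15-17 only).
pre = {15: 3.3, 16: 2.7, 17: 3.15}
post = {15: 4.12, 16: 4.25, 17: 4.02}
M = 1.02  # overall mean of (post - pre) across all respondents, as reported

for rid in sorted(pre):
    diff = post[rid] - pre[rid]   # change between the two surveys
    dev = diff - M                # deviation from the overall mean difference
    print(rid, round(diff, 2), round(dev, 2), round(dev * dev, 2))
```

Summing the squared deviations over every respondent (not just these rows) yields the SS = 0.44 shown in the table.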
generative artificial intelligence and learning achievement: serial mediating roles of self-efficacy and cognitive engagement," Frontiers in Psychology, vol. 14, 2023.[13] R. Michel-Villarreal, E. Vilalta-Perdomo, D. E. Salinas-Navarro, R. Thierry-Aguilera and F. S. Gerardou, "Challenges and Opportunities of Generative AI for Higher Education as Explained by ChatGPT," Education Sciences, vol. 13, no. 9, p. 856, 2023.[14] Y. Walter, "Embracing the future of Artificial Intelligence in the classroom: the relevance of AI literacy, prompt engineering, and critical thinking in modern education," International Journal of Educational Technology in Higher Education, vol. 21, no. 15, 2024.[15] X. Zhai, "ChatGPT User Experience
the curriculum include anxiety [9], self-efficacy [10], attitude, perceived ease of use/technology acceptance [11], and perceived usefulness. Furthermore, there is evidence suggesting that as the number of instructional technologies available at institutions grows, faculty are less likely to use them [12], owing to lack of interest or capacity to use the tools, self-efficacy, and personal ideals in pedagogy. Trouble points in utilization include underestimating the complexities of adopting any new technology, including building instructor comfort and knowledge, as well as the time required to deliver courses using different technology platforms [13-15]. Schroeder [16] recently projected a short-term vision of AI in higher education, including
. Fatade, "Attitudes towards Computer and Computer Self-Efficacy as Predictors of Preservice Mathematics Teachers' Computer Anxiety," Acta Didactica Napocensia, vol. 10, no. 3, pp. 91–108, Nov. 2017, doi: 10.24193/adn.10.3.9.
[10] D. Abubakar and K. H. Kmc, "Relationship of User Education, Computer Literacy and Information and Communication Technology Accessibility and Use of E-Resources by Postgraduate Students in Nigerian University Libraries," Library Philosophy and Practice (e-Journal), 10-June-2017 [Online]. Available: https://digitalcommons.unl.edu/cgi/viewcontent.cgi?article=4474&context=libphilprac [Accessed: 1-Feb-2023].
[11] A. R. Henson, "The impact of computer efficacy on the success of
Higher Education, vol. 27, no. 3, pp. 275–286, 2002.
[17] E. B. Nuhfer, "The place of formative evaluations in assessment and ways to reap their benefits," Journal of Geoscience Education, vol. 44, no. 4, pp. 385–394, 1996.
[18] F. Fitriani, "Implementing authentic assessment of curriculum 2013: Teacher's problems and solutions," Getsempena English Education Journal, vol. 4, no. 2, 2017.
[19] R. Yilmaz and F. G. K. Yilmaz, "The effect of generative artificial intelligence (AI)-based tool use on students' computational thinking skills, programming self-efficacy and motivation," Computers and Education: Artificial Intelligence, vol. 4, p. 100147, 2023.
[20] V. Roger-Monzó, "Impact of generative artificial intelligence in higher
that examined the impact of wagering and iterative feedback on engagement and performance, and (2) a classroom study involving 24 students in a sophomore-level Industrial Engineering course that explored real-world application and metacognitive effects. Results from the controlled experiment showed that wagering and feedback led to significant improvements in student engagement, measured in terms of interest, enjoyment, and concentration. However, immediate performance gains were not observed. The classroom study revealed high levels of voluntary engagement, with students solving ten times as many problems as in traditional assignments and demonstrating wagering patterns indicative of metacognition. These findings offer insights into how gamified
enhances student performance.

By analyzing metrics such as completion rates and common student errors, we identified key areas where learners struggled and addressed them by scaffolding the activities into smaller components. This approach, shown to enhance knowledge retention and self-efficacy [9], [10], proved especially effective for challenging topics with high struggle rates as well as for introductory topics where students needed extra guidance. The observed reduction in average failure rates from 12.90% to 4.35% (an 8.55 percentage point decrease) demonstrates the value of our method in promoting mastery and reducing student frustration, aligning with studies that advocate iterative assessment designs for better learning outcomes [11].

Case Study #1
and SBL tools enhance cognitive understanding, they may need to be supplemented with additional instructional strategies to influence affective factors such as interest, self-efficacy, and career aspirations. Previous research suggests that attitudes toward highly specialized technical fields often require extended exposure and real-world applications to shift meaningfully [17, 18]. Future implementations could explore strategies such as incorporating mentorship programs, project-based learning, or industry collaborations to strengthen students' sense of engagement and belonging in QC. Furthermore, engagement and usability ratings (M = 3.90, SD = 0.87) indicate that students generally found the tool intuitive and engaging. However, technical
the southeastern US during Spring 2024. In each course, we randomly assigned students to an experimental group, who were tasked with creating SCRVs, and a control group, who were not. We compared the exam scores of students by condition. We also compared the exam scores of students based on whether or not they submitted in the last 3 hours before the deadline. We found that, in Course B, the average exam score was higher in the experimental group, while in Course A, there was no significant difference in average scores. We also found that early video submission (before 9 PM on the due date) was correlated with higher exam scores, and vice versa.

Introduction
Historically, prior programming experience and self-efficacy have been shown to lead students
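A between-group exam-score comparison like the one described above is commonly done with a two-sample t-test. A minimal sketch (not the authors' analysis; the scores below are invented) using only the Python standard library:

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's two-sample t statistic:
    (mean_a - mean_b) / sqrt(s_a^2/n_a + s_b^2/n_b),
    using sample variances, so unequal group sizes/variances are fine."""
    se = math.sqrt(variance(a) / len(a) + variance(b) / len(b))
    return (mean(a) - mean(b)) / se

# Hypothetical exam scores for an experimental and a control group:
experimental = [88, 92, 79, 85, 91, 83]
control = [81, 77, 85, 72, 80, 78]
t = welch_t(experimental, control)
print(round(t, 2))  # 2.78
```

The statistic would then be compared against a t distribution (degrees of freedom via the Welch-Satterthwaite approximation) to decide whether the difference by condition is significant.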
Ease of Use (codes: effort, self-efficacy/knowledge, interaction with interface, user experience, familiarity). Quote: "...I don't have a fear of it, or anything like that, ... but trying to figure out where the right productive middle ground of where that was going to be."
Output Quality (codes: effective, efficient, usable, higher, faster, clear, correct). Quote: "...It was now feasible to use voice cloning and AI-generated or synthetic voices, which are indistinguishable from the real voice."

Results and Discussion of Interviews
Findings and interpretations of data from the
control. The classroom experience revealed gains in students' self-efficacy in engineering design and improvements in their ability to recognize key components of feedback-control systems. Class tests also revealed challenges associated with scaffolding both students and teachers at these grade levels and levels of experience or interest in computational subjects. Students struggled with algorithmic design in particular, which made it harder for them to complete the capstone projects in the curricula. There were also lessons learned about robust design and instrumentation of physical devices in classes that might use them for only a short period of time, posing hurdles for both students and teachers. Software affordances developed for programming and analyzing
cannot capture. These comments identify a broader range of negative and positive course-related issues, providing deeper, student-centered, context-specific insights that help improve teaching outcomes [7, 13]. Free-response feedback can also unveil difficulties students experience during the course [14]. Moreover, the style of feedback itself can significantly shape the student experience. For instance, reflective writing can reveal "personal learning experiences" [8]. Research finds that reflective journaling improves content comprehension and promotes self-analysis, encourages self-efficacy, fosters student engagement (especially when faculty respond to comments), and strengthens career skills [4]. While collecting student feedback
institutional change, leading some teachers to question the feasibility of long-term CS integration.

To support teachers in their professional development, the program offered reimbursement for up to two attempts at the CS teacher certification exam, upon submission of receipts. One teacher, who taught business and math, successfully passed the exam after studying the program's materials and engaging with coding exercises. While passing the exam was a measurable success, many teachers explored and implemented engaging CS activities in their classrooms. Teachers integrated CS concepts in various ways, such as through after-school clubs, free-time activities, or elective courses. Some used program resources to support projects like Unity game development, robotics, and