participate but also to explain the importance of AI in science to their peers and community. This enabled scholars to feel a personal connection as their scientific project was envisioned within a real-world context.

Figure 2. Google Teachable Machine [16].

Measures and data sources

The self-reports of the children’s self-efficacy for AI were collected via a survey administered on Qualtrics before and after the Shark AI program. Self-efficacy for AI was assessed using an adapted version of the original Science subscale (9 items) and the Technology and Engineering subscale (9 items) of the widely used 37-item S-STEM questionnaire developed by North Carolina State University’s Friday Institute [19]. Only the Science and Technology
different challenge for repeat attempts. The goal of the pilot was to measure the impact on students' study habits, self-efficacy, and learning outcomes. Students completed a 25-item survey regarding knowledge of course content and self-efficacy at the start and end of the course. At the end of each chapter, students were offered the self-assessment quiz, followed by a brief survey on the insights the student gained about their understanding of the material, and the impact on study habits and self-efficacy. This paper presents exploratory analyses examining students' self-assessment quiz usage patterns through the course, quantifying students' engagement with the self-assessment quizzes, and gathering insights into whether students found the self-assessment
formative times in their computing education [6, 8]. There have been many attempts at developing novel approaches to support various aspects of programming metacognition, improve self-efficacy, and provide automated feedback and assessment for students in introductory programming courses [5, 6, 8]. Programming metacognition can be broadly defined as how students think about programming and the problem-solving strategies they employ to achieve a goal when given a programming task [9]. However, most of these methods have yet to be successfully scaled and applied in the classroom. Previous studies suffer from issues such as small sample sizes, difficulty of validation or replication, and software that is not shared or is abandoned
the perceived challenges of live streaming as an informal learning opportunity for computer science students?

Through this work, we aim to understand and evaluate whether or not live streaming impacts an undergraduate student’s perceived self-efficacy in software or game development (RQ1). To quantitatively measure self-efficacy, we have adapted questions from Ramalingam and Wiedenbeck’s Computer Programming Self-Efficacy Scale and Hiranrat et al.’s survey measurements for software development careers [41, 42]. As we allow the students to choose their own projects and set their own goals, we expect there to be some division among the participants in how quickly they believe they have improved, based on the gravity of the goals they set for
] and measured to what extent students felt included, valued, and respected. We used this scale with the purpose of exploring students’ sense of belongingness, specifically in CS, and modified the items to include “in computing.” A definition of computing was also included: “Computing is defined as doing things like making an app, coding, fixing a computer or mobile device, creating games, making digital music, etc.” Sample questions then asked students to indicate the extent to which they agreed with statements such as, “I feel comfortable in computing” and “Compared with most other students at my school, I know how to do well in computing.”

Self-Efficacy: Self-efficacy captures students’ beliefs that they can accomplish designated tasks [38] related to
job seekers. The system, called Virtual Interview (VI)-Ready, offers an immersive role-play of interview scenarios with 3D virtual agents serving as hiring managers. We applied Bandura’s concept of self-efficacy as we investigated: 1) overall impressions of the system; 2) the impact on students’ job interview preparedness; and 3) how internal perceptions of interview performance may differ from external evaluations by hiring managers. In our study, we employed a convergent parallel mixed methods approach. Undergraduate and graduate students (n = 20) underwent virtual job interviews using the platform, each interacting with one of two different agents (10 were randomly assigned to each). Their interactions were video recorded. Participants then
research exists on the use of case studies to motivate non-STEM majors to study technological topics, particularly in contexts where hands-on technology activities complement the case study by exploring its underlying themes and demonstrating the significance of the technology. In this course, the case studies serve an additional purpose: they provide real-world examples of the impact of either embracing or ignoring a new technology.

Self-efficacy refers to the confidence in one’s ability to accomplish specific tasks, and enhancing students’ self-efficacy increases the likelihood of achieving desired outcomes [4, 5]. Research across various disciplines highlights the critical role of experiential learning in building self-efficacy. For example, educators
in Science, Technology, Engineering, and Math (STEM) professions has long been a problem, especially among minority and female students. According to studies, structural impediments such as a lack of mentorship, limited access to research opportunities, and budgetary restrictions disproportionately affect these populations [1], [2]. To address these discrepancies, the ARROWS program at North Carolina A&T State University has taken a holistic strategy that focuses on mentorship, hands-on research, and a supportive academic atmosphere.

Mentorship, defined as experienced persons guiding mentees through academic and professional problems, has been demonstrated to dramatically increase retention rates [3]. For example, it promotes self-efficacy
and quantitative measures. Qualitatively, we will assess student engagement and self-efficacy through Likert-scale surveys. Quantitatively, we will compare task completion times and scores to evaluate learning outcomes. By automating tree validation and grading, the tool not only enhances engagement but also improves teaching efficiency.

1 Introduction

Parse trees, or syntax trees, are essential in computer science education as they represent the hierarchical structure of programming language expressions. They are fundamental in understanding syntax analysis, compiler construction, and language processing algorithms. However, traditional teaching methods often involve manually constructing syntax trees through static diagrams or hand-drawn exercises. While
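The hierarchical structure that parse trees capture can be made concrete with a small example. The sketch below is a minimal recursive-descent parser for "+" and "*" expressions that builds the tree as nested tuples with standard operator precedence; it is purely illustrative and is not the tool described in the excerpt.

```python
# Minimal parse-tree construction for arithmetic expressions.
# Grammar: expr := term ('+' term)* ; term := factor ('*' factor)* ;
#          factor := NUM | '(' expr ')'
# Assumes well-formed input; no error handling.

import re

def tokenize(src):
    """Split a string like '2+3*4' into number and operator tokens."""
    return re.findall(r"\d+|[+*()]", src)

def parse(tokens):
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def expr():
        nonlocal pos
        node = term()
        while peek() == "+":
            pos += 1
            node = ("+", node, term())
        return node

    def term():
        nonlocal pos
        node = factor()
        while peek() == "*":
            pos += 1
            node = ("*", node, factor())
        return node

    def factor():
        nonlocal pos
        tok = peek()
        if tok == "(":
            pos += 1              # consume '('
            node = expr()
            pos += 1              # consume ')'
            return node
        pos += 1
        return int(tok)

    return expr()

tree = parse(tokenize("2+3*4"))
print(tree)  # → ('+', 2, ('*', 3, 4)) — '*' binds tighter, so it sits deeper in the tree
```

Because precedence is encoded in the grammar (terms inside expressions), the resulting tree shape directly mirrors the evaluation order students are asked to reason about when drawing such trees by hand.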
dataset. This dataset incorporated condition-based scaling to account for the six operational modes within the data (Figure 3), as each mode could have its own nominal sensor values and failure points. Students were instructed to write a report showing their models’ performance: Figure 4 shows one student’s visualization of their RNN model, comparing the predicted RUL value to the test data’s RUL value for five engine units. The model’s performance accounted for 30% of their grade, compared to a baseline linear regression model with no data processing.

Figure 4. Final Project RNN Model Performance (From Student’s Final Project)

Results of pre and post course surveys

A self-efficacy survey was selected as the primary
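Condition-based scaling as described above typically means standardizing each sensor within its operational mode, so that mode-dependent nominal levels do not swamp the degradation signal. The sketch below shows one common way to do this; the column names ("mode", "s1") are hypothetical, and this is a generic illustration rather than the course's actual preprocessing code.

```python
# Condition-based scaling sketch: z-score each sensor column separately
# within each operational mode, so readings become comparable across modes
# that have different nominal sensor values.

import pandas as pd

def condition_based_scale(df, mode_col, sensor_cols):
    """Standardize sensor columns within each operational-mode group."""
    out = df.copy()
    grouped = out.groupby(mode_col)[sensor_cols]
    out[sensor_cols] = (out[sensor_cols] - grouped.transform("mean")) / grouped.transform("std")
    return out

# Toy data: two modes whose nominal sensor level differs by a large offset.
df = pd.DataFrame({
    "mode": [0, 0, 0, 1, 1, 1],
    "s1":   [10.0, 11.0, 12.0, 110.0, 111.0, 112.0],
})
scaled = condition_based_scale(df, "mode", ["s1"])
print(scaled["s1"].tolist())  # → [-1.0, 0.0, 1.0, -1.0, 0.0, 1.0]
```

After scaling, both modes are centered at zero with unit (sample) standard deviation, which is the usual prerequisite before feeding multi-mode run-to-failure data to an RNN.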
(1994) usability inspection methods, usability testing will be done through focus groups to explore participants’ perceptions of the user interface design, identify design problems, and uncover areas to improve the user interface and user experience in Ecampus and hybrid courses (RQ1). A heuristics evaluation [16, 17] of the user interface will be conducted to ensure that usability principles are followed to provide a user interface with inclusivity and accessibility (RQ2). A Likert scale will be adapted from Bandura’s (1989) Multidimensional Scales of Perceived Self-Efficacy [18] to explore participants' self-regulatory efficacy (RQ3).

Planned Intervention

The proposed study will combine elements of both exploratory and quasi-experimental
introduction to hardware applications. Once they have gained facility in the programming language, they then apply this knowledge to hardware applications. In an alternative approach being piloted during this study, students are introduced to programming and algorithmic thinking via the hardware applications; the material is introduced concurrently instead of sequentially.

Findings from pre- and post-surveys indicate that students taught using both approaches had similar improvements in self-efficacy to code and build projects with basic circuitry. In addition, most students appreciated the approach used in their class; if taught with a hardware-first approach, they thought a hardware-first approach provides greater learning, and if taught with a software
Christine Alvarado. 2021. The Relationship Between Sense of Belonging and Student Outcomes in CS1 and Beyond. In Proceedings of the 17th ACM Conference on International Computing Education Research (Virtual Event, USA) (ICER 2021). Association for Computing Machinery, New York, NY, USA, 29–41. https://doi.org/10.1145/3446871.3469748

[4] Alex Lishinski and Joshua Rosenberg. 2021. All the Pieces Matter: The Relationship of Momentary Self-efficacy and Affective Experiences with CS1 Achievement and Interest in Computing. In Proceedings of the 17th ACM Conference on International Computing Education Research (Virtual Event, USA) (ICER 2021). Association for Computing Machinery, New York, NY, USA, 252–265. https://doi.org/10.1145
differences in GPA alone. Analysis of students’ survey responses shows that real-time feedback and unlimited submission attempts helped students assess their learning progress and motivated them to continuously improve their solutions. Instant feedback and unlimited submission attempts were regarded by students as likely having positively impacted academic integrity in the course. The effect of automated feedback and optional assignments on students’ need to visit office hours is explored. Implications for future pedagogical practice and research are discussed.

Introduction

Timely and effective feedback provided to students on their submitted work has the potential to significantly enhance learning, improve student self-efficacy, reduce drop-out rates, and
experiences and projects are important parts of learning. Later, Kolb, in his Experiential Learning Cycle (KLC) [2], placed large importance on experiencing and applying/doing as essential elements of optimal learning. Positive experiential learning from accomplishing successful projects is also emphasized as an important component of increasing self-efficacy [3]. Therefore, it is not surprising that KLC implementations were reported in most of the engineering disciplines, like civil engineering [4] – [6], mechanical engineering [6], chemical engineering [4], [5], [7], aeronautical engineering [6], industrial engineering [8], and manufacturing engineering [4], [5], [9]. Bansal and Kumar [10] describe a state-of-the-art IoT ecosystem that includes edge devices
Engineering Education, 2024

Work in Progress: Community College Student Experiences with Interdisciplinary Computing Modules in Introductory Biology and Statistics Courses

Abstract

Interdisciplinary professionals with both domain and computing skills are in high demand in our increasingly digital workplace. Universities have begun offering interdisciplinary computing degrees to meet this demand, but many community college students are not provided learning experiences that foster their self-efficacy in pursuing them. The Applied Programming Experiences (APEX) program aims to address this issue by embedding computing modules into introductory biology and statistics courses at community colleges. Here, we describe an
’ Sense of Belonging: A Key to Educational Success for All Students. (2nd ed.). Routledge, 2018.

[5] C. Gillen-O’Neel, “Sense of belonging and student engagement: A daily study of first- and continuing-generation college students,” Research in Higher Education, vol. 62, no. 1, pp. 45-71, Feb. 2021.

[6] M. Bong and E.M. Skaalvik, “Academic self-concept and self-efficacy: How different are they really?,” Educational Psychology Review, vol. 15, pp. 1-40, Jan. 2003.

[7] D.W. Johnson, R.T. Johnson and K.A. Smith. Active Learning: Cooperation in the College Classroom. Edina, MN: Interaction Book Company, 1991.

[8] M.J. Baker, “Collaboration in collaborative learning,” Interaction Studies: Social behaviour and communication in biological and artificial systems
the OR: exploring use of augmented reality to support endoscopic surgery,” in Proceedings of the 2022 ACM International Conference on Interactive Media Experiences, in IMX ’22. New York, NY, USA: Association for Computing Machinery, 2022, pp. 267–270. doi: 10.1145/3505284.3532970.

[30] T. Khan et al., “Understanding Effects of Visual Feedback Delay in AR on Fine Motor Surgical Tasks,” IEEE Transactions on Visualization and Computer Graphics, vol. 29, no. 11, pp. 4697–4707, Nov. 2023, doi: 10.1109/TVCG.2023.3320214.

[31] M. Menekse, S. Anwar, and S. Purzer, “Self-Efficacy and Mobile Learning Technologies: A Case Study of CourseMIRROR,” in Self-Efficacy in Instructional Technology Contexts, C. B. Hodges, Ed., Cham
educational settings," Journal of Applied Psychology, vol. 28, no. 3, pp. 211-224, 2022.

[13] B. Cook-Chennault and V. Villanueva, "Student anxiety in competitive educational games," Educational Psychology Review, vol. 42, no. 1, pp. 83-95, 2020.

[14] A. Cook-Chennault and V. Villanueva, "Inclusive game design in engineering education," Journal of Diversity in Higher Education, vol. 19, no. 2, pp. 105-118, 2020.

[15] R. M. Marra, K. A. Rodgers, D. Shen, and B. Bogue, “Women Engineering Students and Self-Efficacy: A Multi-Year, Multi-Institution Study of Women Engineering Student Self-Efficacy,” J. of Engineering Edu., vol. 98, no. 1, pp. 27–38, Jan. 2009, doi: 10.1002/j.2168-9830.2009.tb01003.x.

[16] M. A. Hutchison, D. K. Follman, M. Sumpter, and G. M. Bodner
scenarios to understand a concept or relationship. The tool measures the students’ self-efficacy beliefs with respect to their knowledge gained from using the tool, and objectively measures their understanding of the concepts as well as their confidence in their understanding.

The Methods section details the study instruments and the software tools developed. The Results section provides details on the recorded differences in student learning attainment as measured by student performance on the interactive posttest. Multiple factors affecting student performance, including time spent exploring the software tool and interface type (continuous vs discrete), were explored. The new direct metric of student interaction time combined with the increased sample size
University of Florida (UF). Her research focuses on self-efficacy and critical mentorship in engineering and computing. She is passionate about broadening participation and leveraging evidence-based approaches to improve the engineering education environment for minoritized individuals.

Victor Perez

STEPHANIE KILLINGSWORTH, University of Florida

©American Society for Engineering Education, 2025

WIP: One Teacher’s Experience Adapting an Innovative, Flexible Computer Vision Curriculum in a Middle School Science Classroom

Introduction

Artificial intelligence (AI) is predicted to be one of the most disruptive technologies in the 21st century [1], and to prepare all young people to live and work in an AI
teachers in rural areas. It measures teachers’ perceptions about rural life, activities, and behaviors as well as relationships with persons in the rural community. The RIS showed an acceptable internal reliability of α = 0.72–0.83, indicating its effectiveness in capturing rural identity. The teacher mindset survey, adapted from [47] and [48], was a vital instrument that supplied valuable insights into diverse aspects of teachers’ mindsets. It measures parameters such as concerns about social comparison, self-efficacy, comfort being oneself, measurement of task value, as well as the perceived costs of participating in the training program. Each survey item was measured on a 5-point Likert scale, with 1 being “strongly disagree” to 5 being
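The internal-reliability coefficient reported for the RIS is Cronbach's α, which can be computed directly from raw item responses. The sketch below is a generic implementation of the standard formula, α = k/(k−1) · (1 − Σ item variances / variance of total score); it is not the instrument authors' code, and the toy data are hypothetical.

```python
# Cronbach's alpha for a k-item Likert scale.
import numpy as np

def cronbach_alpha(responses):
    """responses: (n_respondents, k_items) array of Likert ratings."""
    x = np.asarray(responses, dtype=float)
    k = x.shape[1]
    item_var_sum = x.var(axis=0, ddof=1).sum()   # sum of per-item sample variances
    total_var = x.sum(axis=1).var(ddof=1)        # sample variance of total scores
    return k / (k - 1) * (1 - item_var_sum / total_var)

# Perfectly consistent items (each respondent rates all items identically)
# give alpha = 1.0; uncorrelated items drive alpha toward 0.
print(cronbach_alpha([[1, 1], [2, 2], [3, 3]]))  # → 1.0
```

Values in the 0.72–0.83 range reported for the RIS are conventionally read as acceptable-to-good internal consistency.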
had on programming labs’ completion. Such analysis may compare courses where hints were provided and courses where hints were not provided for the same problems, including controls for other confounds, such as different instructors, course offerings, student demographics, and more. Future work may also evaluate student self-efficacy, including a student's belief that the hint system impacted that student's self-efficacy.

Conclusion

Advanced zyLabs includes many powerful features for students and instructors, including industry-standard IDEs, a highly-customizable development environment and tools, a Linux machine’s desktop, collaborative environments, and more. Nonetheless, each metric of student usage was about the
critical feedback can play a key role in motivation to continue learning. One study [37] on critical constructive feedback (CCF) with student control showed that for low-achieving students, the presence of a TA led them to choose CCF more often and neglect CCF results less. This was in a game-based AI agent-supported learning environment for history lessons that gave students ownership of the communication with the AI agent. Another game-based study discussed how the self-efficacy (SE) of a tutee agent has an impact on students’ performance [38]. However, the conversation was scripted, the questions were multiple-choice, and feedback was communicated through a chat window. In view of their results, the authors recommended not only designing more SE into
, 30, 34]. Hence, the majority of studies reporting benefits of LLMs focus more on student engagement, interaction patterns, and behaviors, or student perceptions, such as satisfaction, perceived benefit, self-efficacy, or motivation [33, 37, 25, 39, 40, 41, 26, 42, 7, 43, 27, 44, 45, 46]. We discuss the relevance of this work at further length in Section 5 but note here that our study differs significantly in context, as our tasks are not assessing programming ability specifically, but broader knowledge and problem-solving skills related to computer engineering and embedded systems.

3 Methods

To test the potential impact of LLMs in SRL, we designed a 2-stage study consisting of a counterbalanced repeated measures experiment, and a
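Counterbalancing a two-condition repeated-measures design means alternating the order in which participants encounter the conditions, so order effects cancel across the sample. A minimal sketch of such an assignment (condition labels and participant IDs are hypothetical, not the study's actual protocol):

```python
# Counterbalanced assignment for a 2-condition repeated-measures design:
# alternate condition orders across participants by enrollment order,
# so each order is used equally often.

from itertools import cycle

def counterbalance(participants, orders=(("A", "B"), ("B", "A"))):
    """Map each participant to a condition order, cycling through orders."""
    return dict(zip(participants, cycle(orders)))

assignment = counterbalance(["p1", "p2", "p3", "p4"])
print(assignment)
# → {'p1': ('A', 'B'), 'p2': ('B', 'A'), 'p3': ('A', 'B'), 'p4': ('B', 'A')}
```

With an even number of participants, each order appears exactly half the time, which is the balance property the design relies on.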
Paper ID #37589

Active Project: Supporting Young Children’s Computational Thinking Skills Using a Mixed-Reality Environment

Dr. Jaejin Hwang, Northern Illinois University

Dr. Jaejin Hwang is an Associate Professor of Industrial and Systems Engineering at NIU. His expertise lies in physical ergonomics, occupational biomechanics, and exposure assessment. His representative works include the design of VR/AR user interfaces to minimize the physical and cognitive demands of users. He specializes in the measurements of bodily movement as well as muscle activity and intensity to assess the responses to physical and environmental
ID   Pre    Post   Post−Pre   Diff−M   (Diff−M)²
…    …      3.95   0.75       −0.27    0.08
15   3.3    4.12   0.83       −0.2     0.04
16   2.7    4.25   1.55        0.53    0.28
17   3.15   4.02   0.87       −0.15    0.02

M: 1.02   SS: 0.44
M: mean of the difference between the two surveys (Post−Pre)
SS: sum of squares of deviations

3.2.2 Descriptive Results of Survey Questions

This section provides detailed descriptive results of the survey questions.

Q3 - Self-Efficacy in Problem Solving: Rate your confidence in solving programming
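The summary statistics in the table (M, the mean post−pre difference, and SS, the sum of squared deviations from M) can be recomputed mechanically. The sketch below applies the same computation to only the four difference values visible in this excerpt; the reported M = 1.02 and SS = 0.44 were computed over the complete table, so the toy result differs slightly.

```python
# Recompute the survey table's summary statistics from post-pre differences:
# M is the mean difference, SS the sum of squared deviations from M.

def mean_and_ss(diffs):
    m = sum(diffs) / len(diffs)
    ss = sum((d - m) ** 2 for d in diffs)
    return m, ss

diffs = [0.75, 0.83, 1.55, 0.87]   # the Post - Pre values visible in the excerpt
m, ss = mean_and_ss(diffs)
print(round(m, 2), round(ss, 2))   # → 1.0 0.41
```

The per-row deviation columns in the table follow the same pattern: each row's deviation is its difference minus M, and the final column is that deviation squared.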
generative artificial intelligence and learning achievement: serial mediating roles of self-efficacy and cognitive engagement," Frontiers in Psychology, vol. 14, 2023.

[13] R. Michel-Villarreal, E. Vilalta-Perdomo, D. E. Salinas-Navarro, R. Thierry-Aguilera and F. S. Gerardou, "Challenges and Opportunities of Generative AI for Higher Education as Explained by ChatGPT," Education Sciences, vol. 13, no. 9, p. 856, 2023.

[14] Y. Walter, "Embracing the future of Artificial Intelligence in the classroom: the relevance of AI literacy, prompt engineering, and critical thinking in modern education," International Journal of Educational Technology in Higher Education, vol. 21, no. 15, 2024.

[15] X. Zhai, "ChatGPT User Experience
the curriculum include anxiety [9], self-efficacy [10], attitude, perceived ease of use/technology acceptance [11], and perceived usefulness. Furthermore, there is evidence suggesting that as the number of instructional technologies available at institutions grows, faculty are less likely to use them [12] due to lack of interest/capacity to use the tool, self-efficacy, and personal ideals in pedagogy. Trouble points in utilization include underestimating the complexities of using any new technology, including formation of instructor comfort and knowledge, as well as the time required to deliver courses using different technology platforms [13-15]. Schroeder [16] recently projected a short-term vision of AI in higher education, including
practice examples to build their self-efficacy, while those who are highly motivated may benefit from more challenging tasks to maintain their engagement. Furthermore, linguistic diversity must also be acknowledged, considering language preferences. Non-native English speakers may require additional language support to comprehend complex texts. The ideal technology would be able to comprehend these conditions, interpret the knowledge, and provide personalized and context-aware explanations similar to a human instructor. This level of adaptability would significantly enhance the learning experience, making it more engaging, effective, and tailored to individual students’ needs.

In recent years, advances in artificial intelligence (AI), machine learning