essential competency for undergraduate students. However, current conceptual frameworks often neglect the intersection between gender and disciplinary background, particularly regarding how female students in Science, Technology, Engineering and Mathematics (STEM) and non-STEM fields engage with AI. This review synthesizes existing literature on AI literacy, highlighting the distinct challenges and strengths of female undergraduates across disciplines. We propose a Tri-Pillar Integrative Framework—Technical Competency, Ethical Reasoning, and Contextual Application—to foster comprehensive AI literacy. The paper underscores the fragmented nature of current AI education, emphasizing gender-specific barriers such as stereotype threat and techno-anxiety, and
that prioritize transparency, personalization, and ethical safeguards. This study contributes evidence-based insights to guide educators, developers, and policymakers in ensuring the ethical and effective adoption of AI in education.

Keywords: Generative AI, ChatGPT, perception, TAM, adoption, education, ethics

Introduction

Artificial Intelligence (AI) has emerged as a groundbreaking technology across industries, and its potential in education is equally promising. It has reshaped how processes are conducted, decisions are made, and interactions are facilitated. AI has demonstrated immense educational potential to revolutionize traditional pedagogies, enhance administrative efficiency, and improve personalized learning experiences. The capacity of AI
can be used in education in a creative and ethical way.

Sidney Katherine Uy Tesy, Texas A&M University

Sidney Katherine Uy Tesy is a second-year student at Texas A&M University’s College of Arts and Sciences, where she is pursuing a degree in Philosophy and Sociology (BA) and a minor in Psychology. She is a recipient of an Undergraduate Glasscock Scholarship, which has allowed her to engage in qualitative research on digital ethics, mobile apps, and social stigma, working alongside one of her faculty mentors. Her research interests focus on the intersection of technology and social institutions that concern education and legal systems.

Dr. Kristi J. Shryock, Texas A&M University

Dr. Kristi J. Shryock
synthesize information. These students learned how to think critically about the questions they need to ask to lead them to the answers they needed [5]. Additionally, AI tools can be used to check answers and equations to provide a deeper understanding of complex engineering topics [6].

The integration of AI in engineering education also presents challenges. Students may develop an overreliance on AI tools, and AI may negatively impact academic integrity [7]. Furthermore, concerns surrounding the ethical implications of AI, including issues of bias, privacy, and the inability to validate AI-generated information, highlight the need for comprehensive training on the responsible use of AI [8], [5]. Educators must consider these challenges to ensure that AI tools
pivotal moment in AI adoption, driving rapid transformation across many fields. For higher education, the new technological wave demands a reevaluation of traditional teaching and learning models to remain applicable in an AI-driven world [2]. Higher education institutions now face a monumental task of embracing AI literacy as a core competency, akin to such fundamental competencies as critical thinking and effective communication. However, integrating AI into higher education presents several challenges, including the lack of standardized guidelines for curricular integration and established governance structures, ethical and safety concerns, faculty preparedness, quality and reliability of outputs, and the potential of widening the digital divide and
, generative AI offers transformative opportunities, from fostering creativity in problem-solving to streamlining instructional design. However, these advancements also present challenges, including ethical considerations, reliability concerns, and the risk of over-reliance on AI systems. To address these complexities and maximize the potential of generative AI, it is essential to explore how these tools are being implemented, the challenges they pose, and their implications for students and educators.

This study conducts a scoping review to systematically examine the applications and innovations of generative AI in engineering education. By employing the five-step framework proposed by [2], this review seeks to provide critical insights into the current
focuses on human-computer interaction, human-AI interaction, and social and collaborative computing. Since 2023, Dr. Smith has been continuously involved in efforts to assess and understand student adoption of Generative AI (GenAI) across campus. She participated in writing institution-wide policies for Mines, and she has given numerous guest lectures and organized workshops on the ethics and use of GenAI in engineering education.

©American Society for Engineering Education, 2025

Assessing Student Adoption of Generative Artificial Intelligence across Engineering Education from 2023 to 2024

Abstract

Generative Artificial Intelligence (GenAI) tools and models have the
critical exercises where students compare different platforms to determine suitability for specific tasks, promoting a discussion on data ethics, privacy, and academic honesty. To promote further implications for practice, the study showcases opportunities for reflection, both as individual users and in groups through Socratic Dialogue, as faculty and students test the limitations of different platforms and address the ethics of using GenAI in a world that increasingly blurs the lines pertaining to Cyberethics.

Keywords: Generative AI, Pedagogical Innovation, AI Usability Spectrum, Bloom’s Revised Taxonomy, Cyberethics

Background

When ChatGPT was released on November 30, 2022, it amassed a historic one million users in its first five days [1], with
[20], [7], [8], [17]

9. Civic and ethical engagement:
- Civic engagement [14]
- Collaboration and social development [19], [14]
- Collaborative exploration [18], [19], [14]
- Respect for shared spaces [15], [14]
- Social skill development [7], [8], [14]

10. Sensory and emotional development:
- Sensory stimulation [21], [17]
- Leadership development [8]
- Positive technological [13], [10], [11], [14]
- Community
the accuracy of the returns.

Methodologies and Limitations for Engineering Students using AI

The integration of artificial intelligence (AI) in the field of academic research has accelerated innovative changes in pedagogical methods, student outcomes, and ethical issues depending on the discipline. Despite the numerous frameworks suggested for the integration of AI, there are not enough resources targeted at engineering and engineering technology students. This section reviews seven frameworks that help students use AI tools in research. While these frameworks are useful for understanding the technical, ethical, and pedagogical aspects, none fully describes the needs of specific engineering domains. Therefore, we
information, loss of critical thinking skills, and the potential development of overreliance. Additional concerns emerged regarding ethical considerations such as data privacy, system bias, environmental impact, and preservation of human elements in education.

While student perceptions align with previously discussed benefits of AI in education, they show heightened concerns about distinguishing between human and AI-generated work, alongside ethical issues of data privacy, system bias, and environmental impact. The findings suggest important considerations for implementing AI chatbots in educational settings. To address students’ concerns regarding academic integrity and information reliability, institutions can establish clear policies regarding AI use
intelligence (AI) into higher education has accelerated significantly over the past decade, with AI increasingly being leveraged to personalize learning experiences, streamline administrative processes, and enhance data-driven decision-making. Despite this rapid expansion, there remain considerable challenges and gaps in knowledge regarding the effective and ethical implementation of AI technologies in educational settings. Many institutions continue to grapple with issues related to data privacy, algorithmic bias, and the broader implications of AI on both teaching and administrative practices. This work in progress seeks to explore the perspectives and experiences of key stakeholders, specifically faculty and academic management staff, concerning the
addresses the integration of artificial intelligence (AI) topics into introductory engineering courses. With the proliferation of AI in everyday life, it is important to introduce the topic early in the engineering curriculum. This paper focuses on generative AI and machine learning topics using two different educational strategies. The objective of this research was to explore students’ comprehension of AI and their motivation to engage in AI learning after being introduced to AI tools.

In a first-semester project engineering course, generative AI was introduced as a tool. Students were guided on the ethical and effective use of generative AI and were encouraged to discuss its limitations. Students had the option to use generative AI for their writing
academic community. There is ongoing debate about whether faculty should teach students how to use GAI tools, restrict their usage to maintain academic integrity, or establish regulatory guidelines for sustained integration into higher education. Unfortunately, limited research exists beyond surface-level policies and educator opinions regarding GAI, and its full impact on student learning remains largely unknown. Therefore, understanding students' perceptions and how they use GAI is crucial to ensuring its effective and ethical integration into higher education. As GAI continues to disrupt traditional educational paradigms, this study seeks to explore how students perceive its influence on their learning and problem-solving.

As part of a larger mixed-methods study
suggests that while GenAI tools can improve problem-solving and technical efficiency, engineering education must also address ethical, human-centered, and societal impacts. The dVC framework provides a structured lens for assessing how GenAI tools are integrated into curricula and research, encouraging a more holistic, reflective approach. Ultimately, this paper aims to provoke dialogue on the future of engineering education and to challenge the prevailing assumption that technical skill development alone is sufficient in an AI-mediated world.

1 Introduction

We take as our starting premise that engineers have a responsibility to society, and consequently, that engineering educators have a responsibility to convey
a growth in academic integrity filings since the advent of ChatGPT. In fact, [2] points to a Stanford University survey in which one-sixth of students said they had used ChatGPT on assignments or exams. This article [2] also points to the issue of hallucinations, where AI focuses on generating text that sounds good but may not be scientifically accurate. However, [1] also points to potential efficiencies and utility of AI in higher education, such as teaching ethical use of AI, growth of tutoring/teaching assistants, and operational efficiencies. Aoun [3] discussed the impact of AI on the human experience in physical (personalized medicine/drug delivery and disease identification), cognitive (increased workplace productivity, focused effort on
. Feedback was used to refine the user interface and improve the responsiveness of the speech-to-text engine, ensuring a seamless interaction between the child’s speech and the application’s output. The application is ready to be tested in real-world classrooms or therapy settings; ethics approval is pending. With the speech-to-text technique incorporated into AR, it becomes possible to provide timely responses in a format that is engaging, involving children more often and with more passion in speech therapy sessions. This paper seeks to fill this gap by developing an AR application tailored to support speech therapy, building on the benefits already proven in
online education during the COVID-19 pandemic, emphasizing the difficulties in preserving the integrity of assessments, the quick changes in educational methods, and the growing dependence on technology. Their results support the necessity of creative approaches to academic integrity in online settings.

Online learning concerns: Toprak et al. [3] highlight that enforcing academic integrity in online learning environments is more challenging due to ethical concerns, and they investigate differences in how students and teachers view privacy and the application of rules. According to their research, 78% of students prefer moderate punishments for misbehavior, but 52% of teachers support harsher punishments. Despite these disagreements, both sides agreed that it is critical
ethical uses of LLMs, which included helping to understand concepts, correcting grammar, and creating citations, among others. When pressed, students revealed that stress, running out of time, and failing to find the answer for themselves pushed them to using LLMs in ways that may seem unethical [4].

In a computer science course, LLMs can be used to both generate code and help a student understand it [5]. Depending on how the LLM is being leveraged, it could be perceived as a benefit or risk to the student [6]. During their first year, many computer science students learn the fundamentals of programming, which serves as a critical foundation for their future computer science courses. However, as they encounter difficult programming challenges on a
critical aspects necessary to create virtual worlds that are engaging, inclusive, and developmentally appropriate for young children. These elements are: Engagement and Motivation (EM), Collaboration and Teamwork (CT), Creativity and Problem-Solving (CPS), Communication and Interaction (CI), Inclusivity, Accessibility, and Age-Appropriate (IAA), Design and Environment (DE), Data Security and Privacy (DSP), Safety and Technical Security (STS), Evaluation and Feedback (MEF), Cultural Responsiveness (CRR), Community Building (CB), Facilitation and Educator Tools (FET), and Ethics, Empathy, and Decision-Making (EDM) [4]. This paper uses these elements to develop the virtual world environment in Roblox.

Table 1. Elements for Virtual World [4].

The VW integrates
by incorporating social justice, ethics, problem definitions, and professional development considerations, many still rely on technically focused curricula developed during the Cold War ([28], as cited in [24]). Robinson [3] analyzed engineering textbooks’ approaches to teaching electrical circuits over about 80 years (1940-2017), focusing on how they present and understand engineering knowledge. Although more recent textbooks included brief “real-world” applications at the beginnings and ends of chapters, they primarily concentrated on mathematical analysis, problem-solving, and technical details, minimizing theoretical explanations. By contrast, earlier textbooks contained more detailed written explanations, emphasized theoretical understanding, and
, research assistance, automated grading, writing coaching, lesson planning, helping to create progress reports, and helping teachers decide how to teach a subject [76], [77], [78]. Although GenAI is a powerful technology in education, it still needs to be used with extra caution to ensure it is used safely and responsibly. For example, [70] discusses the application of Artificial Intelligence in online learning and distance education, based on a systematic review of empirical studies. The application of AI in these settings has been shown to enhance the learning experience by personalizing content, facilitating peer interaction, and providing real-time feedback. Nevertheless, it also warns of the ethical and legal implications of widespread AI use in
syllabi, how many address knowledge unit XXX?” This experiment was conducted by providing up to six individual syllabi simultaneously (limited by the platforms and their associated context windows). A second version of this experiment was conducted by providing a single combined PDF document, which included all 16 syllabi. This document was optimized and text-recognized using Adobe Acrobat to assist with readability by the LLM. The authors used the Policy, Legal, Ethics, and Compliance (PLE) knowledge unit, which was known to be unique to one specific syllabus, whereas many of the others could have been generalized. This selection was made to help assess the accuracy of the evaluation. For ease of identification, the single combined document experiment
technologies to increase efficiency in their work [29].

Ethical cautions of using AI were prevalent in the literature [19], [20], [23]. These cautions involved not only students’ ethical use of AI, privacy concerns, academic quality, quality of the results generated, and legal considerations, but also a focus on needs for future policy, ethical review, and monitoring in evaluation of AI-generated content. Therefore, cautions should be held at the forefront of future research in engineering education and in the skills development of future engineers.

Discussion

RQ1: How have engineering educators used generative artificial intelligence (AI) tools to enhance students' proficiency in industry professional skills?

As the implementation of AI in engineering
: Applied behavior; 3. Attitudes: Feelings and beliefs

Figure 7 presents a word cloud summary of terms associated with competencies used for Integrated Engineering (a) and a bar graph summary of the dimensions adapted from [2], [3] across the 19 reviewed studies (b). The textual overview of competencies employed (Figure 7a) may suggest the socio-technical-cultural emphasis of competency explanations and how they are defined with holistic terms such as professional, global, ethical, etc. The codification overview of competencies employed (Figure 7b) suggests that the most frequently studied dimensions are: 1. Knowledge (understanding and familiarity with information), 2. Skills (applied behavior), and 3. Attitudes (feelings and beliefs).
student queries and identify usage patterns across courses. For RQ4, we compare student prompts with course syllabi and the university’s student code to identify and characterize instances of potential policy violations. We use natural language processing (NLP) techniques to classify question types and patterns. This mixed-method approach will provide a comprehensive understanding of how students interact with the system and how it supports their learning. This study aims to provide insights into the role of AI-driven systems like AI-bot by investigating the different types of questions and their relevance in supporting student learning, while also addressing potential challenges and ethical considerations in their use.

2 Related Work

Researchers have
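A question-type classification step like the one described above could be prototyped as a simple rule-based baseline before adopting statistical NLP models. The categories and keyword lists below are illustrative assumptions, not the study's actual taxonomy:

```python
# Hypothetical rule-based baseline for classifying student prompts.
# Category names and keywords are illustrative, not from the study.
CATEGORIES = {
    "concept": ["explain", "what is", "why does", "difference between"],
    "debugging": ["error", "exception", "doesn't work", "fix"],
    "logistics": ["deadline", "due", "exam", "syllabus", "office hours"],
}

def classify_prompt(prompt: str) -> str:
    """Return the first category whose keyword appears in the prompt."""
    text = prompt.lower()
    for category, keywords in CATEGORIES.items():
        if any(kw in text for kw in keywords):
            return category
    return "other"

prompts = [
    "Can you explain the difference between a stack and a queue?",
    "I get an IndexError when I run my loop, how do I fix it?",
    "When is the final exam?",
]
print([classify_prompt(p) for p in prompts])
# -> ['concept', 'debugging', 'logistics']
```

A baseline of this kind also gives human coders a starting codebook to refine when validating machine-assigned labels.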
1, highlight that the best performance was achieved with Adam optimizers for 100 epochs. The comparison in Figure 2 further confirms that our hybrid model significantly outperformed standalone traditional models in terms of classification accuracy.

Even though the results show how effective our approach is, there are significant ethical concerns raised by using AI to predict students' academic performance, especially with regard to bias and fairness. The OULAD dataset might have inherent biases related to demographics, socioeconomic status, or institutional regulations because it is based on real student records. Machine learning models run the risk of sustaining current educational disparities if these biases are not addressed properly. A major concern
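One lightweight way to surface the kind of bias concern raised above is to compare prediction accuracy across demographic groups and report the gap. The sketch below uses synthetic records, not OULAD data, and the group labels are placeholders:

```python
# Illustrative per-group accuracy audit; records are synthetic examples.
from collections import defaultdict

def accuracy_by_group(records):
    """records: list of (group, true_label, predicted_label) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, true, pred in records:
        total[group] += 1
        correct[group] += int(true == pred)
    return {g: correct[g] / total[g] for g in total}

records = [
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 0), ("A", 0, 0),
    ("B", 1, 1), ("B", 0, 1), ("B", 1, 0), ("B", 0, 1),
]
acc = accuracy_by_group(records)
gap = max(acc.values()) - min(acc.values())
print(acc, f"accuracy gap: {gap:.2f}")
```

A large gap between groups would flag exactly the disparity-sustaining behavior the paragraph warns about, prompting rebalancing or fairness-aware training.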
as they work on programming tasks. Students with extended error resolution times are perceived to display struggling behaviors. By tracking the duration and frequency of error corrections, instructors can gain insight into students’ debugging strategies.

Furthermore, by integrating unit tests with the keystroke analysis, the tool enables instructors to dynamically assess code correctness. The pass/fail rates of the unit tests are clear measures of students’ progress.

5 Ethical Considerations

Given the focus of this research on student data collection and analysis, the study adheres to established ethical guidelines in order to protect the students’ privacy and maintain data security. This research has been approved by our University’s
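The error-resolution timing and unit-test pass-rate measures described above can be sketched as small helper functions. The event format and example values here are assumptions for illustration, not the tool's actual data model:

```python
# Hedged sketch: pairing error events with resolutions and computing
# unit-test pass rates; event tuples are a hypothetical format.
def error_resolution_times(events):
    """events: chronological list of ('error'|'resolved', timestamp_seconds).
    Pairs each error with the next resolution and returns the durations."""
    times, pending = [], None
    for kind, t in events:
        if kind == "error":
            pending = t
        elif kind == "resolved" and pending is not None:
            times.append(t - pending)
            pending = None
    return times

def pass_rate(results):
    """results: list of booleans, one per unit test run."""
    return sum(results) / len(results) if results else 0.0

events = [("error", 10), ("resolved", 40), ("error", 100), ("resolved", 250)]
print(error_resolution_times(events))  # -> [30, 150]
print(pass_rate([True, True, False, True]))  # -> 0.75
```

Long durations in the first measure, combined with a low pass rate in the second, would correspond to the struggling behavior the tool is meant to surface.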
ethical implications and societal impacts ensures they are prepared to develop responsible and sustainable solutions. With the increasing reliance on technology and the internet, protecting sensitive information from cyber threats has become a top priority for individuals, businesses, and governments alike, and incorporating AI/ML into the program empowers students to become future leaders who drive progress in an increasingly digital world, with a strong emphasis on the critical field of cybersecurity. Approaching this need to fuse AI/ML into our cybersecurity curriculum starts by identifying the key applications of AI/ML in cybersecurity. Once these are identified, we can determine the freshman, sophomore, and junior courses that can prepare the students
, the teaching community has raised substantial concerns regarding academic integrity, student learning, ethical application, and the dynamics of human-AI interaction [4, 5, 6, 7, 8]. While empirical studies on LLM usage in education have been conducted in this early stage of adoption, given the current novelty of LLMs in education and the myriad ways they might be incorporated into an educational setting, additional research is crucial for better understanding the short-term and long-term effects of LLM-based AI on teaching and learning in computer science.

Due to the relative lack of evidence from early research in this area, we believe the immediate effects of using generative AI in classroom