Paper ID #48016
PEER HELPER (Peer Engagement for Effective Reflection, Holistic Engineering Learning, Planning, and Encouraging Reflection) Automated Discourse Analysis Framework
Yilin Zhang, University of Florida
Dr. Bruce F. Carroll, University of Florida. Dr. Carroll is an Associate Professor of Mechanical and Aerospace Engineering at the University of Florida. He holds an affiliate appointment in Engineering Education. His research interests include engineering identity, self-efficacy, and matriculation of Latine/x/a/o students to graduate school. He works with survey methods and overlaps with machine learning using
Paper ID #46681
Future-Ready Students: Validating the Use of Natural Language Processing to Analyze Student Reflections on a Remote Learning Group Project
Majd Khalaf, Norwich University. Majd Khalaf recently graduated from Norwich University with a Bachelor's degree in Electrical and Computer Engineering, along with minors in Mathematics and Computer Science. He is passionate about DevOps, embedded systems, and machine learning. Throughout his academic career, Majd contributed to various projects and research in natural language processing (NLP) and computer vision. He served as a Senior AI Researcher at Norwich University's
study's objective to align curricula with job market requirements. A reflective approach acknowledges inherent biases and strives for a balanced, insightful study [3].

3.2 Job Listings Data Sources and Collection

A total of 106,018 electrical engineering job postings were collected from five prominent U.S. job portals: LinkedIn, Indeed, Glassdoor, CareerBuilder, and SimplyHired. These platforms were selected for their broad reach and substantial volume of job advertisements, ensuring a diverse and representative dataset. A custom Python script was developed to automate the extraction of job titles, company names, and job descriptions based on the search parameters "Electrical Engineer" and "Electrical Engineering," identified through a preliminary review
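The collection script itself is not reproduced in the excerpt. As a rough illustration of the kind of automation described, a minimal scraper might look like the sketch below, assuming a hypothetical portal URL and CSS selectors; real portals differ in page structure, pagination, rate limits, and terms of service.

# Illustrative sketch only; PORTAL_URL and the CSS selectors are hypothetical placeholders.
import csv
import time

import requests
from bs4 import BeautifulSoup

SEARCH_TERMS = ["Electrical Engineer", "Electrical Engineering"]
PORTAL_URL = "https://example-jobportal.com/search"  # placeholder portal

def fetch_postings(term: str, page: int) -> list[dict]:
    """Fetch one results page and extract job title, company name, and description."""
    resp = requests.get(PORTAL_URL, params={"q": term, "page": page}, timeout=30)
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    postings = []
    for card in soup.select("div.job-card"):  # hypothetical result-card selector
        postings.append({
            "title": card.select_one(".job-title").get_text(strip=True),
            "company": card.select_one(".company-name").get_text(strip=True),
            "description": card.select_one(".job-description").get_text(strip=True),
        })
    return postings

with open("ee_job_postings.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["title", "company", "description"])
    writer.writeheader()
    for term in SEARCH_TERMS:
        for page in range(1, 6):  # small demo range; the study covered far more pages
            writer.writerows(fetch_postings(term, page))
            time.sleep(2)  # polite crawl delay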
, and correctness. Each criterion is assessed using a distinct set of evaluation guidelines. After individual scores for each category were assigned and agreed upon, the total score was calculated by summing the scores across all three categories. On the other hand, traditional scoring of concept maps evaluates a student's understanding based on structural components such as the number of concepts, hierarchy levels, and cross-links [1]. While the scoring method emphasized validating the correctness of connections and hierarchical relationships, this step is often omitted to save time and ensure efficient, reproducible assessments. Key metrics include (i) knowledge breadth, measured by the number of concepts (NC), (ii) knowledge depth, reflected by the
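As a minimal sketch (not the authors' scoring code) of how such structural metrics could be computed from a concept map stored as a directed graph, assuming one common convention for cross-links (an edge between concepts at the same hierarchy level):

import networkx as nx

def concept_map_metrics(edges):
    """edges: iterable of (parent_concept, child_concept) pairs."""
    g = nx.DiGraph(edges)
    roots = [n for n in g if g.in_degree(n) == 0]
    # Hierarchy level = shortest distance from any root concept.
    level = {}
    for root in roots:
        for node, dist in nx.single_source_shortest_path_length(g, root).items():
            level[node] = min(level.get(node, dist), dist)
    nc = g.number_of_nodes()                          # knowledge breadth (NC)
    depth = max(level.values()) + 1 if level else 0   # number of hierarchy levels
    cross_links = sum(1 for u, v in g.edges if level.get(u) == level.get(v))
    return {"NC": nc, "hierarchy_levels": depth, "cross_links": cross_links}

# Toy map: the (velocity, height) edge links two level-2 concepts, so it counts as a cross-link.
print(concept_map_metrics([("energy", "kinetic"), ("energy", "potential"),
                           ("kinetic", "velocity"), ("potential", "height"),
                           ("velocity", "height")]))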
data reflects student engagement by analyzing historical data from a learning management system (LMS) alongside observations of class schedules. Online activity was compared to semester timelines and qualitative codes to identify patterns of alignment. The findings suggest that accurate measurement of engagement requires the integration of both LMS data and contextual classroom information. In Case Study 2, we explored how learning analytics influences pedagogical change through surveys and interviews with instructors. Instructors generally found static data related to enrollment and academic standing more useful than dynamic data capturing students' online behaviors. The difficulty in translating data into actionable pedagogical strategies rendered
identifies students who may be lagging in their action plans, enabling the Electrical Engineering Department to provide targeted interventions and resources. These measures aim to foster higher levels of ambition and task completion, ultimately supporting students in their professional development.
This material is based upon work supported by the National Science Foundation under Grant No. 2022299.

INTRODUCTION

The preparation of engineering students for professional careers requires a robust framework that integrates academic performance with experiential learning. The evolution of engineering programs in the U.S., including Electrical Engineering (EE), has historically reflected a shift from hands-on, industry-focused training toward serving
to cater to individuals at any level and ensures participants learn something new regardless of their background.

Methodology

Retrospective analysis of weekly reflective blog posts and a thirty-minute interview with the elementary teacher after the program served as our primary data sources to help us understand the teacher's experience in the program and how the teacher integrated machine-learning concepts into 3rd to 5th-grade classrooms. The weekly blog posts provided valuable insights into the teacher's thoughts, challenges, and growth throughout the program, and offered a detailed, ongoing account of how she engaged with the material and the ways in which she processed her learning. For data
open-ended feedback to identify team leaders or pinpoint students in need of additional support. Meanwhile, robust prompt design allows instructors or researchers to tailor LLMs for specific instructional goals, though the field continues to refine best practices in prompt engineering [9]. Within higher education, peer evaluation and feedback play critical roles in developing students' teamwork abilities and self-reflection skills. Tools such as CATME (Comprehensive Assessment of Team Member Effectiveness) facilitate structured peer rating and feedback, ensuring that each team member's contributions are accounted for [11], [12]. However, while numeric ratings give broad insight into performance, the sheer volume of qualitative comments can
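As a hedged illustration of the kind of prompt design discussed above, the sketch below triages open-ended peer comments with an LLM; the label set, prompt wording, and the call_llm client are assumptions rather than any tool described in the excerpt.

# Hypothetical prompt template for triaging peer comments; the labels are assumptions.
PROMPT_TEMPLATE = """You are assisting an instructor who reviews peer evaluations.
Classify the peer comment below with exactly one label:
- LEADERSHIP: the teammate is described as organizing or carrying the team
- NEEDS_SUPPORT: the teammate is described as struggling or disengaged
- NEUTRAL: neither of the above clearly applies
Return only the label.

Peer comment: {comment}
"""

def triage_comment(comment: str, call_llm) -> str:
    """call_llm: placeholder callable that sends a prompt string to an LLM and returns text."""
    reply = call_llm(PROMPT_TEMPLATE.format(comment=comment)).strip().upper()
    # Fall back to NEUTRAL if the model returns anything outside the label set.
    return reply if reply in {"LEADERSHIP", "NEEDS_SUPPORT", "NEUTRAL"} else "NEUTRAL"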
implementation before taking the class? (RQ2) How do these perceptions develop or change by the end of the AI course?

2 Prior Work

Previous scholars have developed curricula focused specifically on ethics in AI [15] or have adopted an integrated approach, examining societal implications of algorithms alongside their technical structure and applications [16, 17, 18]. Courses that have introduced the societal implications of AI to primary and middle school students have used lectures where instructors discuss risks associated with AI and corresponding mitigation methods [17] and project-based learning where students implement algorithms while reflecting on their societal implications [16]. Standalone courses on ethical and responsible AI have centered around
{ahslim@arizona.edu, heileman@arizona.edu, akbarsharifi@arizona.edu, roxanaa@arizona.edu, kmanasil@arizona.edu}
The University of Arizona

Abstract

Graduation rates are critical performance metrics for higher education institutions, reflecting student success and the effectiveness of educational programs. Among various factors, the complexity of university curricula, measured by prerequisite course sequences, total credit requirements, and course flexibility within degree programs, significantly influences outcomes such as timely graduation and retention rates. Previous studies analyzing these effects often lack a unified framework to address how factors such as gender, academic preparation, and
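One hedged way to operationalize prerequisite-driven complexity, loosely following the blocking- and delay-factor idea from curricular analytics (whether the study uses exactly this formulation is not stated in the excerpt), is sketched below.

import networkx as nx

def structural_complexity(prereq_edges):
    """prereq_edges: (prerequisite, dependent_course) pairs forming a DAG."""
    g = nx.DiGraph(prereq_edges)
    order = list(nx.topological_sort(g))
    # Longest chain of courses ending at / starting from each course (counted in courses).
    ending = {v: 1 for v in g}
    for v in order:
        for u in g.predecessors(v):
            ending[v] = max(ending[v], ending[u] + 1)
    starting = {v: 1 for v in g}
    for v in reversed(order):
        for w in g.successors(v):
            starting[v] = max(starting[v], starting[w] + 1)
    total = 0
    for v in g:
        blocking = len(nx.descendants(g, v))   # courses blocked if v is failed or delayed
        delay = ending[v] + starting[v] - 1    # longest prerequisite chain passing through v
        total += blocking + delay
    return total

# Example: a three-course calculus chain plus a two-course physics chain.
print(structural_complexity([("Calc I", "Calc II"), ("Calc II", "Diff Eq"),
                             ("Physics I", "Physics II")]))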
personal abilities (Ownership), define clear goals and actionable steps (Wisdom), habitually advance toward these goals while reflecting on progress (Execution), and self-regulate while accessing supportive resources (Resilience) [19]. Building on insights from the pilot program that the developers completed, the following are the key features of the POWER platform:

1. Non-Directive Coaching: Facilitates self-discovery by asking questions rather than giving direct advice, encouraging students to take control of their learning and decisions.
2. Personalized Interactions: Customizes conversations per student, providing guidance that aligns with each individual's unique situation and goals.
3. Goal Setting and Tracking: Aids
interaction data alone fail to explain the underlying reasons for student behavior. The varied experiences of students further complicate the establishment of clear patterns, emphasizing the need for additional contextual insights. Institutions adopting LA frequently encounter capability-related challenges, reflecting a growing need for expertise in evaluating technology during early adoption stages [6]. Access to analytics data alone is not enough; effective interpretation of the data is essential for creating learning environments that actively engage students and improve outcomes. Although learning analytics dashboards (LADs) have demonstrated potential in fostering engagement and interaction in online learning, their ability to significantly improve
from 14,990 in 2000 to 51,338 in 2019, a 242% increase over two decades. Similarly, the number of graduates with a doctorate grew from 779 to 2,790 in the same period, an increase of 258%. While this increase in the pursuit of postgraduate degrees in the field reflects the rapid growth of the industry, universities still grapple with the task of evaluating increasingly large volumes of applications. Several large universities adopt a holistic review approach for admissions that is time-consuming and relies heavily on skilled human reviewers. The average time taken for each full review can vary between 10 and 30 minutes based on the skills of the reviewer [3]. A survey conducted in 2023 by Intelligent, an education magazine [4], reported that 50% of 400
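As a quick check of the quoted growth rates: (51,338 − 14,990) / 14,990 ≈ 2.42, i.e., a 242% increase, and (2,790 − 779) / 779 ≈ 2.58, i.e., a 258% increase.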
learning objectives. Both courses, with a combined enrollment of 650 students, reflect large class sizes and cater to a diverse student population primarily consisting of junior-level undergraduates majoring in computer science or related disciplines. The courses were delivered in a hybrid format, offering students access to both in-person lectures and recorded sessions. This diverse student body and flexible delivery format provided a comprehensive testing ground for evaluating the effectiveness and accuracy of microlearning materials. Microlearning materials, including interactive quizzes, digital flashcards, mini-lessons, and scenario-based exercises, were integrated into the coursework for both classes. However, the frequency of microlearning
Curriculum Choice: For all chosen data science programs, we selected the syllabi from the core data science curriculum for our content analysis. The core curriculum is determined by whether it covers the top competencies identified by a previous study [18] and listed in Table III above. Tables VII and VIII present the number of core data science courses selected for this study, organized by country and by institution, respectively. Both tables demonstrate that the Chinese and U.S. program samples are comparable in terms of competency coverage and the balance between required and elective courses. Table VIII further highlights variations between institutions, which may reflect either broader curricular options or differences in syllabus availability. In this study, "core
open challenge [6]. There have been efforts such as the Data Science Corps: Wrangle-Analyze-Visualize (DSC-WAV) and the Attitude, Skills, Communication, Collaboration, and Reflection (ASCCR) framework that have tried to teach students how to collaborate but often do not focus exclusively on teaching the social skills necessary for success in collaboration. Thus, this work seeks to contribute an approach for teaching CPS to data science students. We have developed a module for teaching CPS that allows students to learn and apply their skills in a mock data science project. This work is grounded in well-established frameworks for CPS and follows a simulation-based approach to teaching these skills. Although several existing frameworks provide a foundation for
universities in the United States [8]. By evaluating these initiatives using pre- and post-surveys and participant reflections, the study provides actionable insights for designing equitable AI literacy resources [9], [10]. Studies like this have the potential to influence engineering education policies, bridge access gaps, and equip students and faculty with the skills needed to navigate the digital-intelligence transition [2], [11]. Additionally, this study contributes to the literature on the important role that professional development plays in the development of AI literacy skills in students. The following evaluation questions (EQ) were asked to assess the impact of the workshop:

EQ1 (Quantitative Question): Do students' perceived AI ethic
productivity, has also been the focus of discussion. The H-index is often discussed both for its ability to indicate productivity and to serve as a point of comparison between an institution's departments or individual researchers [4], [5], [6]. While its importance in assessing research units is recognized, there is broad agreement that the metric could be refined to better reflect the complexities of research impact. Alongside the analysis of scholarly metadata, significant attention has also been given to institutional collaboration. Collaboration among researchers, universities, industries, and institutions can influence productivity, with its effectiveness shaped by factors like partnership type, proximity, and academic discipline [7], [8]. For example, a
test accuracy of 91.92%. As shown in Figures 3a and 3b, training and validation trends converged smoothly over 50 epochs, with minimal overfitting. Validation accuracy stabilized near the test accuracy, while losses decreased steadily, reflecting strong generalization capabilities. The classification report (Table 3) highlights the model's reliability, with a precision of 97% and an F1-score of 95% for high-performing students, and a recall of 87% for low-performing students. Overall, the macro F1-score of 87% and weighted F1-score of 92% demonstrate its balanced performance. Predicted grades closely matched actual grades in regression tasks, with minor deviations (Table 4). SHAP analysis further validated the model by identifying prior grades (G1, G2
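As a hedged sketch of this kind of reporting pipeline (not the authors' code, and using a random-forest stand-in rather than the network they presumably trained over epochs), the per-class metrics and a SHAP-based feature ranking could be produced as follows; the data here are synthetic stand-ins with prior-grade features named G1 and G2 to mirror the text.

import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Synthetic stand-in data: prior grades G1, G2 and study time predict a high/low performer label.
rng = np.random.default_rng(0)
X = pd.DataFrame({"G1": rng.integers(0, 21, 500),
                  "G2": rng.integers(0, 21, 500),
                  "studytime": rng.integers(1, 5, 500)})
y = ((X["G1"] + X["G2"]) / 2 >= 10).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))  # per-class precision/recall/F1

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
# shap_values is a list (older shap) or a 3-D array (newer shap); take the positive class.
sv = shap_values[1] if isinstance(shap_values, list) else shap_values[..., 1]
importance = pd.Series(np.abs(sv).mean(axis=0), index=X.columns).sort_values(ascending=False)
print(importance)  # mean |SHAP| per feature; prior grades should dominate here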
human choice of CSTA standard in about half of instances, suggesting that the outcomes of human versus LLM coding are quite different. And yet it is unlikely that those mismatches always reflect LLM error: it is also possible that the human coder sometimes made a suboptimal choice, especially with such a lengthy and complex task. (The humans needed to decide among 120 CSTA standards when choosing the standard most closely related to each state standard.) In other words, for a complex and occasionally subjective task such as this, we cannot say with confidence that the mismatches always reflect LLM errors. At the same time, there is evidence that the LLM choice was at least sometimes a clear error (e.g., see Table 6). And the pattern of deeming
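For reference, the kind of agreement computation behind an "about half" mismatch figure takes only a few lines; the file and column names below are hypothetical, and Cohen's kappa is shown only as an optional chance-corrected complement to the raw exact-match rate.

import pandas as pd
from sklearn.metrics import cohen_kappa_score

# Hypothetical export: one row per state standard, with the human and LLM CSTA codes.
df = pd.read_csv("standard_alignments.csv")
exact_match = (df["human_csta_code"] == df["llm_csta_code"]).mean()
kappa = cohen_kappa_score(df["human_csta_code"], df["llm_csta_code"])
print(f"exact-match agreement: {exact_match:.1%}, Cohen's kappa: {kappa:.2f}")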
focused intervention strategies.

Keywords: progress analytics, student success, student outcomes, learning analytics, program curriculum, graduation rates, educational data mining

Introduction

While the number of students successfully completing their degrees has steadily increased since the beginning of the century [1], many students face new challenges that reflect a growing array of academic, financial, and personal obstacles [2]. The traditional graduation timeline often proves difficult to achieve due to factors such as credit misalignment, insufficient support systems, financial hardships, and competing personal responsibilities. For many students, these challenges compound over time, creating barriers to degree completion that extend well beyond
, suggesting that shorter, focused content enhances memory retention and helps maintain attention on specific learning tasks [20]. A systematic review and meta-analysis [11] demonstrated that microlearning significantly improves academic performance in higher education compared to traditional macro-learning approaches [1]. The study attributes this improvement to reduced cognitive load, flexible learning environments, promotion of self-directed learning, and timely feedback. The widespread popularity of platforms such as YouTube and TikTok underscores the effectiveness of delivering bite-sized content, reflecting a growing preference for concise and accessible information dissemination. TikTok, in particular, has been studied within the framework of
structured nature of research activities and deadlines.
• Technical Skills: All participants indicated enhanced technical skills, particularly in data analysis and AI methodologies. Alumni highlighted how these skills directly and indirectly contributed to their professional success.
• Self-Confidence: All respondents experienced an increase in self-confidence, especially in presenting findings to varied audiences, including academic and industry professionals.
• Independence: 83% noted greater independence in tackling complex problems, reflecting the problem-solving and critical thinking skills developed during the projects.
• Qualitative Feedback: Respondents expressed appreciation for the real-world
generally high, with a median of 3.88.

Figure 2: Trends in the number of Apps Released. Figure 3a: Average Annual User Ratings for Released Apps. Figure 3b: Average Annual User Ratings based on Review Count.

Figure 2, Figure 3a, and Figure 3b address the previously mentioned RQ1. As shown in Figure 2, the number of mental health app releases has significantly increased since 2009, peaking in 2023. This trend reflects growing attention to mental health issues. Figure 3a shows a general upward trajectory in the average user rating (AUR) over the years, especially in the past decade. This suggests that newer mental health apps tend to receive better ratings, possibly due to improved app quality, better user
teaching assistants to programs with higher undergraduate teaching loads, and identify opportunities for more balanced teaching loads across programs with varying needs or capacity to teach in other similar programs. At the same time, department leadership and committees have used the data to help faculty reflect on their balance between teaching and research.

Case Study 3: Junior Course Enrollment

The following example shows how data and the Dashboards can be used to predict third-year course enrollments for the Aerospace Engineering program. Many engineering curricula experience a spike in program-specific courses during the third year since students take many foundational courses in math, science, and general education during their first two
and examining datasets include proximity, recency, and size. In a review of data science tools, researchers found that most datasets were either "fresh" or not time-relevant (recency), very small in size, and used real data that youth can be expected to be familiar with (proximity) [40]. This was in accordance with another, large-scale K-12 data science dataset review, wherein 296 datasets in K-12 data science curricula were evaluated to identify trends and best practices [41]. The findings showed that most datasets were small, recent, and did not reflect student interest, though they were typically familiar to students. The importance of considering diverse learners and student interests when choosing datasets was expounded by the authors. Another
development. Adverse weather conditions, such as rain, snow, and fog, further complicate the functionality of AVs [11]. These conditions impair the accuracy of sensors like cameras and LiDAR, reducing the reliability of the perception systems. Limited visibility, reflections, and other environmental interferences can lead to erroneous object detection, increasing the likelihood of accidents. Addressing these weather-related challenges is crucial for enhancing the robustness of AV systems. Real-time processing is another significant obstacle for AVs, as they require substantial computational resources to process vast amounts of data from multiple sensors simultaneously [24]. To ensure timely decision-making, AV systems must maintain low latency while handling complex
aligns with established research showing that authentic data and real-world applications enhance student motivation and learning outcomes in STEM education [20], [21]. When students work with actual disciplinary data rather than constructed examples, they better understand the relevance of data science to their future careers and develop more realistic expectations about the data analysis challenges they might encounter professionally. However, instructors faced a common pedagogical challenge in balancing breadth versus depth of topic coverage. This tension emerged particularly when deciding between teaching data science skills more thoroughly (depth) versus covering more disciplinary content (breadth). This challenge reflects a well-documented
(Flattened table excerpt; recoverable cell text:)
• "… set of features are implemented. The printed history does not make sense because …"
• "… features are implemented."
• "Implements most of the core project features, including restaurant selection, menu display, order-taking, and total calculation …"
• "… reflects the description. It implements all key features such as arithmetic operations, trigonometric …"