test usage in engineering courses. Tests and exams are typically heavily used in FECs like statics, dynamics, thermodynamics, and other courses in various engineering disciplines. Understanding why engineering instructors rely heavily on tests to assess student learning in these courses can be crucial in promoting the use of more diverse types of assessments, such as portfolios, concept inventories, reflection-based practices, and project-based practices, as well as intentionality in designing, administering, and interpreting tests, but research documenting this topic has been scarce. Conversations around why instructors make certain course decisions typically involve the contexts these instructors are situated in, emphasizing how
around making research opportunities accessible and also suggest what can be done in class instruction to provide similar benefits to student curiosity. In the current study, we found that students reported that classes encouraged their curiosity when the students encountered uncertainty that led to information seeking, were able to see connections to real-world applications, and when they had engaging instructors. Redundant content, overwhelming classes, time constraints, motivation to get the “right” answer, and critical professors were described as obstacles to students’ curiosity in classes. Students also reflected on how their experiences of curiosity in research compared to their classes in ways that aligned with the identified supports for and
]. Inter-rater reliability was not calculated numerically due to a focus on consensus [21], [27]-[30].

Results & Discussion
Practitioners' definition of engineering intuition did not vary by level of experience but did vary by gender. Men more frequently defined the concept in terms that reflected Innate, whereas women leaned on Experience in their definitions. Despite these differences in how engineering intuition was defined, there was largely consensus in participants’ responses to how engineering intuition is developed. All participants attributed the development of intuition either completely or in part to Experience, underscoring the notion that intuition develops alongside expertise, as expertise is largely developed through experience [8]-[12
Final lab report — 120 points
Lab notebook checks — 100 points total
Weekly reflections — 150 points total
Oral communication — Hypothesis & update presentations: 50 points total
Poster presentation — Poster draft presentation: 30 points; Final poster presentation: 100 points total
TOTAL (subject to change) — 900 points total

2.4. Learning objectives
Scientific Method: This course
learning activities to promote students’ deep learning. Cognitive psychology literature shows that students do not necessarily learn concepts deeply by solving problems unless they monitor their thinking and decision-making process before and during problem solving and reflect on the process afterward, which helps to conditionalize their knowledge, i.e., knowing when to use what knowledge to solve the problem.
In this paper, we present a study on a multidimensional approach to enhancing students' reasoning skills by integrating a variety of explanatory learning activities, namely oral exams, written guidance prompts for homework that ask students to justify their problem-solving process, and a video assignment in which students perform group explanation on
language and cultural resources and how students draw on different sets of talk depending on the context, whether near or distal from the activity at hand. It contends that without a deeper understanding of the role of non-dominant ways of speaking in the act of becoming and belonging, efforts to diversify engineering will remain elusive. Ultimately, this paper summarizes these ideas through a conceptual model for engineering learning environments that value and leverage the resources that students bring from their communities. By creating more equitable and socially just solutions, engineering education can better serve the needs of diverse populations and ensure that the profession is truly reflective of the communities it serves.

Keywords: language and
order to meet the requirements for participation, the students had to be taking their first semester of coursework in the engineering program. Participants were asked to complete interviews and surveys at the end of the fall and spring semesters. The interviews and surveys had participants reflect on their experiences in their math, science, and engineering classes and involvement in engineering activities. Questions from the interviews were based on the previously discussed models of affect and engineering identity. This study uses data from the first two semesters. A total of 17 participants completed the first round of interviews and 13 participants completed the second interview. Three participants illustrating a range of strengths in their
inaccurate or may not have been appropriately maintained by the appropriate institutional office. As Curricular Analytics is applied more broadly, reflecting on current practices and considering how this framework can be expanded to capture more nuanced curricular representations is valuable.

Research Aim
This paper recounts the obstacles encountered during the data collection process for a longitudinal multi-institution project employing Curricular Analytics and offers the data conventions we developed to overcome those obstacles. We outline these procedures not only for transparency, but also to assist other researchers and practitioners who want to use the Curricular Analytics framework at scale. Given the lack of
fluid mechanics concepts. Participants were provided with a worksheet to guide them during the experiment. The worksheet contained steps for the participants to perform during the experiment and allowed the participants to think and reflect on the concepts being taught. Afterward, each participant was given a post-test to examine how much they had learned during the instruction. They were then required to respond to the motivational/engagement survey. Participants received links to the online motivational survey administered via Qualtrics© at the end of the LCDLMs sessions. The survey prompts asked participants to reflect on their LCDLM-facilitated instruction and report how well they believed experiencing LCDLM instruction helped them to engage in
Engineering Fundamentals, and might be reflective of the more restricted focus of such degree programs. As such, there are several disparities between industry expectations and educational programs. Taking the industry expectations as a baseline enables the identification of broad ways current programs might adjust their curricula to better prepare future technicians or engineers to enter the workforce, or to help current workers upskill for new positions in emerging automation, robotics, and mechatronics fields as efficiently as possible. This study has several limitations that should be recognized. For instance, the sample of industry professionals is limited in many ways and does not encompass the entire range of professions within the field of
abilities are affected by factors such as lack of access to training facilities, increased stress levels and burnout, and reduction of urban navigation.

Limitations
There are some potential limitations to the work. One involves potential seasonal effects, as the tests were administered during spring for both groups. Additionally, the participants reflect a convenience sample that was drawn from the BLV population. The participant population spans a large range of ages, and due to the population size in the pre-COVID and post-COVID groups, the research was unable to be segregated into smaller age ranges. Finally, there are different levels of vision within low-vision participants, and even though participants wore blindfolds, this does bring a
. ©American Society for Engineering Education, 2023

The development of an artificial intelligence classifier to automate assessment in large class settings: preliminary results

Abstract
This evidence-based practice paper presents preliminary results in using an artificial intelligence classifier to mark student assignments in a large class setting. The assessment task consists of an approximately 2000-word reflective essay that is produced under examination conditions and submitted electronically. The marking is a simple pass/fail determination, and no explicit feedback beyond the pass/fail grade is provided to the students. Each year around 1500 students complete this assignment, which places a significant and time-constrained marking load
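The abstract does not specify the classifier architecture. As a rough illustration of the kind of binary pass/fail text-classification pipeline involved, here is a minimal sketch using multinomial Naive Bayes in pure Python; the class name, tokenizer, and toy essays are hypothetical and are not from the paper.

```python
import math
from collections import Counter

def tokenize(text):
    """Lowercased alphabetic tokens; a real system would use richer features."""
    return [w.lower() for w in text.split() if w.isalpha()]

class NaiveBayesPassFail:
    """Minimal multinomial Naive Bayes for binary pass/fail essay marking."""

    def fit(self, essays, labels):
        self.classes = sorted(set(labels))
        self.prior = {c: labels.count(c) / len(labels) for c in self.classes}
        self.counts = {c: Counter() for c in self.classes}
        for essay, label in zip(essays, labels):
            self.counts[label].update(tokenize(essay))
        self.vocab = set().union(*self.counts.values())
        self.totals = {c: sum(self.counts[c].values()) for c in self.classes}
        return self

    def predict(self, essay):
        scores = {}
        vocab_size = len(self.vocab)
        for c in self.classes:
            score = math.log(self.prior[c])
            for w in tokenize(essay):
                # Laplace smoothing so unseen words do not zero out the score
                score += math.log(
                    (self.counts[c][w] + 1) / (self.totals[c] + vocab_size)
                )
            scores[c] = score
        return max(scores, key=scores.get)
```

In practice, a marking workload of ~1500 essays would also require a confidence threshold so that borderline essays are routed to a human marker rather than graded automatically.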
Jared Markunas, who assisted in the development of the survey that will inform the engagement guide prototype.

References
[1] D. R. Fisher, A. Bagiati, and S. Sarma, “Developing Professional Skills in Undergraduate Engineering Students Through Cocurricular Involvement,” J. Stud. Aff. Res. Pract., vol. 54, no. 3, pp. 286–302, Jul. 2017, doi: 10.1080/19496591.2017.1289097.
[2] G. Young, D. B. Knight, and D. R. Simmons, “Co-curricular experiences link to nontechnical skill development for African-American engineers: Communication, teamwork, professionalism, lifelong learning, and reflective behavior skills,” in 2014 IEEE Frontiers in Education Conference (FIE) Proceedings, Madrid, Spain, Oct. 2014, pp. 1–7, doi: 10.1109/FIE
adjacent activities context. Kirn & Benson found that students’ choices in the present, including how they solved engineering problems, were connected to how they thought about their futures. In our study, we wonder whether students’ engineering-adjacent participation may also be connected to their FTP development. We anticipate that a majority of Kirn & Benson’s interview questions [5], some of which we adapted to our current context while others were added or removed, will help us explore connections between students' current actions and their future goals. To better capture students’ actions, we have developed interview questions to guide participants to reflect on their future goals, share their present actions related to involvement in
useful subscales that associate with SRMDM. The revised instrument, which was developed through several iterations (Orr, Martin, Ehlert, Brotherton, & Manning, 2021; Ehlert et al., 2019), is called the Multidimensional Inventory of Decision-Making Competency (MIDC) (Ehlert et al., 2019).
MIDC is based on four factors: Impulsivity, Avoidance, Learning, and Information Gathering. Impulsivity encompasses making a decision without considering the consequences; Avoidance targets refraining from making decisions for oneself and allowing other people (i.e., parents or friends) to make decisions on one's behalf; Learning focuses on reflecting on past decisions; and Information Gathering includes collecting information, assessing strategies
average grade for Group A. The blue bars represent anonymous exams, while the red bars indicate non-anonymous exams. As noted earlier, the final exam had a lower average score, which is reflected across the 3 ethnicities shown. Figure 4 also shows that anonymizing the exam leads to performance improvement for Ethnicity 2. Ethnicities 1 and 3 showed no difference.

Figure 4: The average grade by ethnicity for the 4 exams considered for Group A in Class A. The error bars represent the standard error. Group A started with anonymous exams and then switched.

Figure 5: The average grade by ethnicity for the 4 exams considered for Group B in Class A. The error bars represent the standard error. Group B started with non
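The error bars in Figures 4 and 5 represent the standard error of the mean, i.e., the sample standard deviation divided by the square root of the group size. A minimal sketch of that computation (the scores below are made up for illustration and are not the study's data):

```python
import math
from statistics import mean, stdev

def standard_error(scores):
    """Standard error of the mean: sample SD / sqrt(n)."""
    return stdev(scores) / math.sqrt(len(scores))

# Hypothetical exam grades for one ethnicity group (not the paper's data)
exam_scores = [72.0, 85.0, 78.0, 90.0, 65.0]
avg = mean(exam_scores)            # bar height in the figure
se = standard_error(exam_scores)   # half-length of the error bar
```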
the engineering school. Please note that the collection of the 2020 survey data was completed just before the outbreak of the COVID-19 pandemic in March 2020 in North America; thus the data reflected student experiences prior to the pandemic.
The bulk of these data sets were from the National Student Engagement Survey (NSSE) data that the university collected on a three-year basis (that is, 2017 and 2020 data). We included the following variables from the NSSE data in our study: 10 engagement indicators that fall under four themes (i.e., academic challenge, learning with peers, experience with faculty, and campus environment), six variables
-making process that may not have emerged organically (Crandall et al., 2006). The questions in the fourth sweep are broadly divided into four categories: 1) expert-novice contrasts, 2) hypotheticals, 3) experience, and 4) aids. Question prompts include, "Would a novice have noticed the same cues you did in this situation?" or "How could additional training have offered an advantage here?" (Crandall et al., 2006). Some of the prompts are skipped if they were covered in earlier discussions on the problem.
At the conclusion of the CDM, the interviewers determine if enough information has been collected to satisfy the eight dimensions of KAM. Reflecting on the results of the interview so far, the interviewers determine which of these dimensions require
, or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the sponsors.

References
[1] H. Okahana, C. Klein, J. Allum, and R. Sowell, “STEM Doctoral Completion of Underrepresented Minority Students: Challenges and Opportunities for Improving Participation in the Doctoral Workforce,” Innov High Educ, vol. 43, no. 4, pp. 237–255, Aug. 2018, doi: 10.1007/s10755-018-9425-3.
[2] R. Sowell, J. Allum, and H. Okahana, Doctoral Initiative on Minority Attrition and Completion. Washington, DC: Council of Graduate Schools, 2015.
[3] B. M. Gayle, D. Cortez, and R. Preiss, “Safe Spaces, Difficult Dialogues, and Critical Thinking,” ij-sotl, vol. 7, no. 2, Jul. 2013, doi: 10.20429/ijsotl
generated will be valuable for educational policy, philanthropic support, and employer decisions, guiding strategic investments in design and fabrication studios to enhance workforce skills development. This study has two parts; the first employs qualitative methods, consisting of interviews and focus groups with over 48 students, 15 alumni, and 15 employers to identify common themes that reflect makerspaces’ impacts on students’ careers. From this data, we aim to create a universal framework for assessing the link between makerspace experiences and career readiness across diverse institutions and studios. The second part of the iterative study will consist of the development of a quantitative survey instrument utilizing this grounded, qualitatively
Statistics
The descriptive statistics provide insights into the participants’ characteristics and perceptions in the study. Cumulative GPA, a measure of academic performance, shows a mean of 3.63 (SD = 0.350) out of 4.00, indicating that participants generally performed at a high level. Personality traits such as Extraversion and Task control, which were rated on a 7-point scale, reflect the participants’ tendencies in group settings. The mean of 4.52 (SD = 1.418) for Extraversion indicates a propensity to actively contribute in groups, while the mean of 3.69 (SD = 1.442) for Task control suggests a balanced approach to task delegation. The mean of 7.60 on a 9-point scale (SD = 1.52) indicates positive perceptions of team members’ contributions
to their team, which can help or hurt the team's productivity. The course instructor is not involved in most team interactions and, thus, is less equipped to judge the influence of individual students on team dynamics. Peer evaluation tools fill this gap by eliciting feedback from the people most familiar with the team (i.e., team members). This process informs the instructor about team dynamics and helps teams improve their dynamics and performance [17].
To utilize peer evaluation opportunities to improve team performance and reflect on areas of individual growth, students must be familiar with desirable teamwork behaviors and must be able to clearly communicate constructive feedback to their peers. Unfortunately, it is rare for peer feedback
engineering professoriate, and leveraging institutional data to support reflective teaching practices. She has degrees in Electrical Engineering (B.S., M.Eng.) from the Ateneo de Davao University in Davao City, Philippines, where she previously held appointments as Assistant Professor and Department Chair for Electrical Engineering. She also previously served as Director for Communications and International Engagement at the Department of Engineering Education at Virginia Tech, Lecturer at the Department of Engineering Education at The Ohio State University, and Assistant Professor at the Department of Integrated Engineering at Minnesota State University, Mankato. She holds a Ph.D. in Engineering Education from Virginia
learning in your academic setting (pp. 93-110). Society for the Teaching of Psychology.[12] S. Freeman et al., "Active learning increases student performance in science, engineering, and mathematics," Proc. Natl. Acad. Sci. USA, vol. 111, no. 23, pp. 8410-8415, May 2014, doi: 10.1073/pnas.1319030111[13] S. Anwar and M. Menekse, “Unique contributions of individual reflections and teamwork on engineering students’ academic performance and achievement goals,” Int. J. Eng. Educ., vol. 36, no. 3, Art. no. 3, 2020.[14] S. Anwar, "Role of different instructional strategies on engineering students' academic performance and motivational constructs," 2020.[15] A. I. Leshner, "Student-centered, modernized graduate
to advance equity and inclusion, and using data science for training socially responsible engineers.

Muhammad Ali Sajjad, University at Buffalo, The State University of New York
First-year, first-semester PhD student in Engineering Education at University at Buffalo.

©American Society for Engineering Education, 2024

Work in progress: stigma of mental health conditions and its relationship to conditions’ knowledge and resource awareness among engineering students.

Abstract
This work-in-progress paper considers intergroup contact theory to explore how increased awareness of mental health resources and heightened contact with people living with MHCs among engineering undergraduate students reflect in lower
, and the application of knowledge and skills to problems that are representative of those faced by practicing engineers” (p. 124) [8]. As such, learning effectiveness is first and foremost understood as relating to certain outcomes.
However, measures of learning effectiveness go well beyond learning outcomes. Other measures can be attitudes such as motivation [9, 10], satisfaction [9, 11], and initiative [7]. Some studies measured learning effectiveness based on resources, teaching activities, and services provided [12], or instruction, curriculum management, and technological media [2]. As these measures better reflect aspects of teaching practices, they may better represent teaching effectiveness than learning effectiveness. Notably, learning
with reflection behaviors and academic performance. The results indicated a mastery approach significantly affected exam scores and the total number of reflections, while a performance approach only affected exam scores [56]. The findings suggest that mastery-approach students would adopt self-reflection strategies at higher rates than performance-approach students. A similar pattern was found in a study of motivational orientations in pharmacy students and their exam scores on multiple-choice and short-essay exams [57]. Findings indicated that the mastery-approach orientation correlates with higher scores on essay exams, while performance-avoidance orientations correlate with lower scores on either exam type. These results align well with the literature, as
quantitatively analyze how such reflection related to achievement goals. In another example of NLP-in-the-loop, Zhang et al. [22] used NLP to identify bias, unseen relationships, and missed coding opportunities among teachers’ responses regarding questions related to the digital divide. The authors first used traditional methods of qualitative analysis to arrive at a set of thematic codes, then they used NLP techniques to cluster the survey responses and examined the semantic content captured by these techniques. They compared the themes resulting from the traditional approach to those arrived at through NLP to identify incongruities associated with errors and inconsistencies among human coders. Our study focuses primarily on the fourth broad category of using
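Zhang et al.'s clustering step is described only at a high level. As a stand-in sketch of the general idea (my own illustration, not their method), one can vectorize free-text survey responses with TF-IDF and group responses whose cosine similarity exceeds a threshold; the function names, the threshold value, and the toy responses below are all assumptions.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Unit-normalized TF-IDF vectors (word -> weight dicts) for raw text docs."""
    tokenized = [[w.lower() for w in d.split()] for d in docs]
    df = Counter(w for toks in tokenized for w in set(toks))
    n = len(docs)
    vecs = []
    for toks in tokenized:
        tf = Counter(toks)
        vec = {w: (c / len(toks)) * math.log((n + 1) / (df[w] + 1))
               for w, c in tf.items()}
        norm = math.sqrt(sum(v * v for v in vec.values())) or 1.0
        vecs.append({w: v / norm for w, v in vec.items()})
    return vecs

def cosine(a, b):
    """Dot product of two unit-normalized sparse vectors."""
    return sum(v * b.get(w, 0.0) for w, v in a.items())

def group_by_similarity(docs, threshold=0.1):
    """Greedy grouping: responses above the cosine threshold share a label."""
    vecs = tfidf_vectors(docs)
    labels = [-1] * len(docs)
    next_label = 0
    for i in range(len(docs)):
        if labels[i] == -1:
            labels[i] = next_label
            next_label += 1
        for j in range(i + 1, len(docs)):
            if labels[j] == -1 and cosine(vecs[i], vecs[j]) >= threshold:
                labels[j] = labels[i]
    return labels
```

A real replication would add stop-word removal, lemmatization, and a proper clustering algorithm (e.g., k-means over sentence embeddings), but the grouping-by-semantic-similarity idea is the same.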
and thinking styles, whereas higher analytical thinking scores indicate more logical, rigid writing and thinking styles [9]. Lower clout scores indicate more of a self-focus, a “follower” not caring as much about relative social status, whereas higher clout scores indicate a “leader” with more focus on dominating the others in a group [10]. While lower authenticity scores can reflect a measure of deception, they also indicate a prepared or socially cautious response, whereas higher authenticity scores indicate more spontaneous, complex, honest, and unfiltered conversations [11], [12]. Lower emotional tone scores indicate a more negative attitude, whereas higher emotional tone scores indicate a more positive outlook in the text [13]. LIWC provides
-identified as part of a racial or ethnic minority; the remainder identified as White. Each of these seven students participated in one 60–90-minute semi-structured interview [54-55]. Interviews were designed to create a space for the participants to reflect on their K-12 experiences and how those K-12 experiences influenced their decision to major in engineering. The first three student participants were interviewed in person in a private office on the university campus. The remaining four students were interviewed via Zoom. As a first step to the interview, all participants were asked to develop a timeline of their formative experiences leading to becoming an engineering major. Timelines were developed initially by students at the beginning of the