Paper ID #42554
Use of Sentiment Analysis to Assess Student Reflections in Statics
Dr. Amie Baisley, University of Florida
I am an Instructional Assistant Professor at the University of Florida teaching primarily 2nd-year mechanics courses. My teaching and research interests are alternative pedagogies, mastery-based learning and assessment, student persistence in their first two years, and faculty development.
Chiranjeevi Singh Marutla, University of Florida
©American Society for Engineering Education, 2024

Use of Sentiment Analysis to Assess Student Reflections in Statics
In a flipped
Paper ID #41293
Using Scaffolded Exams and Post-Exam Reflection to Foster Students’ Metacognitive Regulation of Learning in a Mechanics of Materials Class
Dr. Huihui Qi, University of California, San Diego
Dr. Huihui Qi is an Associate Teaching Professor in the Department of Mechanical and Aerospace Engineering at the University of California San Diego.
Isabella Fiorini, University of California, San Diego
Edward Zhou Yang Yu, University of California, San Diego
Edward Yu is a third-year undergraduate student at UC San Diego majoring in Aerospace Engineering with a specialization in Astrodynamics. Edward mainly assists with the
correctly, while only two managed to determine the weight of the plate correctly. Several students referred to using tabulated data or simpler shapes in other courses to find the centroid, and cited this lack of practice with equations as a barrier to success in solving the problem used in this study, which does not use a simple shape:

“So you have areas which you can find by, by just like simple shapes. And then those have like known centroids. And then you can just do sum of centroid times area divided by sum of area. For this, because your thing is modeled by an equation, you can't do that. So my dilemma now is remembering the formula.”

(5) Solution Evaluation: the only student to obviously display reflective and evaluative practice was the individual who
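The student's description corresponds to the composite-area centroid formula, x̄ = Σ x̄ᵢAᵢ / Σ Aᵢ; when the boundary is defined by an equation, the sums become integrals, x̄ = ∫x dA / ∫dA. A minimal sketch contrasting the two approaches (the geometry here is hypothetical, not the plate from the study):

```python
# Composite-shape centroid vs. centroid by integration.
# Hypothetical geometry for illustration, not the plate used in the study.

# Composite method: (area A_i, centroid x-coordinate xbar_i) for each simple shape.
parts = [(2.0, 0.5), (1.0, 2.0)]
x_composite = sum(A * x for A, x in parts) / sum(A for A, _ in parts)

# Equation-defined boundary: area under y = x**2 from x = 0 to 1.
# xbar = (integral of x*y dx) / (integral of y dx), via the midpoint rule.
n = 100_000
dx = 1.0 / n
xs = [(i + 0.5) * dx for i in range(n)]
A = sum(x**2 * dx for x in xs)        # integral of x^2 dx = 1/3
Mx = sum(x * x**2 * dx for x in xs)   # integral of x^3 dx = 1/4
x_integral = Mx / A                   # exact value: 3/4

print(round(x_composite, 3), round(x_integral, 3))  # → 1.0 0.75
```

The same formula governs both cases; only the bookkeeping changes, which is consistent with the students' reported difficulty recalling the integral form.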
exams are well written [2]. Even in the context of standardized testing, it has been found that student GRE scores correlated highly with scores on student written responses [3]. Multiple-choice tests can be valid assessment instruments if written correctly, which has led to many concept inventories being created in STEM, like the Mechanics Diagnostic Test, Force Concept Inventory, Statics Concept Inventory, Dynamics Concept Inventory, and many others [2, 4, 5]. Often, multiple-choice tests are used as pre-/post-tests to try to identify changes in learning. The quantitative results of these multiple-choice tests provide easy comparison data when looked at from a pre-/post-test analysis, but the scores do not always adequately reflect a learning
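Pre-/post-test comparisons of this kind are commonly summarized with Hake's normalized gain, g = (post − pre) / (max − pre), the fraction of possible improvement actually achieved. A minimal sketch, with illustrative scores rather than data from any of the cited inventories:

```python
def normalized_gain(pre: float, post: float, max_score: float = 100.0) -> float:
    """Hake's normalized gain: fraction of available improvement achieved."""
    return (post - pre) / (max_score - pre)

# Illustrative numbers only: a class moving from 40% to 70% on a concept
# inventory realizes half of its possible improvement.
print(normalized_gain(40.0, 70.0))  # → 0.5
```

Because the gain is normalized by each group's room for improvement, it supports comparisons across sections with different pre-test baselines, which raw score differences do not.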
Biotechnology in the Division of Science and Technology at the United International College (UIC) in Zhuhai, China. She has trained with ASCE’s Excellence in Civil Engineering Education (ExCEEd) initiative, has been exploring and applying evidence-based instructional strategies, and is a proponent of Learning Assistants (LAs). Her scholarship of teaching and learning interests are in motivation and mindset, teamwork and collaboration, and learning through failure and reflection. Her bioengineering research interests and collaborations are in the areas of biomaterials, cellular microenvironments, and tissue engineering and regenerative medicine. She serves on leadership teams for the Whitaker Center of STEM Education and the
two instructors in fall 2022. These sections administered the same assessments on the same schedule but did not use the hands-on curriculum. We compare learning outcomes between the control and intervention sections as measured by the scores on the assessments described above as well as final course grades. Larger pre/post gains on the TRCV across all intervention sections are evidence that the modeling kit produced improved learning gains with respect to vector concepts and representations. We also share reflections from the two faculty participants regarding their experiences teaching with the models. Overall, the instructors’ experiences and reflections demonstrate the importance of adapting an outside curriculum to the specific educational context
Contiguity: matching graphics are all adjacent to their virtual graphic in 3D space; Figure 1(b).
Embodiment: objects are created or animated to reflect humanesque motions. Users can opt to have guidance from an animated virtual hand that overlays the user's right hand and slowly curls its fingers while the user simultaneously performs the right-hand rule on two vectors; Figure 1(c).
As shown in Figure 1(d), each module is divided into several tasks as
acrylic specimens subjected to tension and torsion loading. Isotropic bodies subject to a two-dimensional stress state, while within their elastic limit, will refract light like a doubly refracting crystal [25]. The authors used two polarizing filters: one between the camera and the specimen, and one, at a ninety-degree orientation to the first, between the specimen and a light source, as shown in Figure 7 for both tension and torsion tests. Due to the directional light requirements and the resulting low light, a standard video camera at 60 frames per second was utilized for video capture. Additionally, the authors did not utilize the high-speed camera for capture because it only records black-and-white video, which negates the capture of the visually stunning and
into an Excel sheet. The responses to the last question were copied and pasted into ChatGPT with the prompt:

    I asked students what they found most confusing or interesting about an assigned reading. Their responses are below. Summarize them according to what was interesting and what was confusing.

Thankfully, the responses did not need to be formatted or edited for ChatGPT to distill rows of text into a short, concise list. The first few times this method was employed, the efficacy of ChatGPT’s summary was verified against the author’s own review of the student responses. It was found to be both an exhaustive and accurate reflection of what the students said. An example of one of ChatGPT’s summaries can be found in the
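The copy-and-paste workflow above can also be scripted. A minimal sketch that assembles the same prompt from a list of spreadsheet rows (the helper name and the example responses are hypothetical; the resulting text can be pasted into ChatGPT, or sent through an API client, exactly as described):

```python
# The instruction text is the prompt quoted in this paper; the rest is a
# hypothetical helper for assembling it with raw student responses.
PROMPT_HEADER = (
    "I asked students what they found most confusing or interesting about "
    "an assigned reading. Their responses are below. Summarize them "
    "according to what was interesting and what was confusing."
)

def build_summary_prompt(responses: list[str]) -> str:
    """Join the study's prompt with one student response per line."""
    return PROMPT_HEADER + "\n\n" + "\n".join(responses)

# Hypothetical student responses, standing in for the Excel column.
prompt = build_summary_prompt([
    "The section on shear stress was confusing.",
    "I found the bridge failure case study interesting.",
])
print(prompt)
```

Keeping one response per line mirrors the "rows of text" the authors pasted in, so no reformatting is needed before submission.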
solution to a Dynamics question

References
[1] B. Memarian and T. Doleck, “ChatGPT in education: Methods, potentials and limitations,” Computers in Human Behavior: Artificial Humans, vol. 1, no. 2, p. 100022, Oct. 2023, doi: https://doi.org/10.1016/j.chbah.2023.100022.
[2] E. L. Hill-Yardin, M. R. Hutchinson, R. Laycock, and S. J. Spencer, “A Chat(GPT) about the future of scientific publishing,” Brain, Behavior, and Immunity, vol. 110, Mar. 2023, doi: https://doi.org/10.1016/j.bbi.2023.02.022.
[3] H. Yu, “Reflection on whether Chat GPT should be banned by academia from the perspective of education and teaching,” Frontiers in Psychology, vol. 14, p. 1181712, 2023, doi: https://doi.org/10.3389/fpsyg.2023.1181712.
[4] J. Qadir, “Engineering Education in the Era
lecture-style class using the SMART Assessment approach, and (3) a lecture-style class with 3 levels of student participation worked into the class to engage both reflective and active learners. The instructors chose several standard dynamics problems to analyze, where each instructor tailored the problem statement for their course and specified how they would require the students to solve the problem and how they would evaluate the solution. These problems will be assigned on future exams in each instructor’s class, graded in each instructor’s own style, and then evaluated as a team to assess student learning outcomes. This work-in-progress paper will present the differences in the style of the problem statement, solution, and evaluation for some of these dynamics
applications of statics.

Figure 14: Answer distribution for question seven (Strongly agree 3.03%, Agree 9.09%, Neutral 18.18%, Disagree 21.21%, Strongly disagree 48.48%).

The eighth survey question prompted students to reflect on whether the activity led them to observe more real-world applications of Statics. While 33% of students responded neutrally, nearly 40% expressed some level of agreement, compared to less than 30% who disagreed. This suggests that the activity’s real-world scenario
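The percentages reported for Figure 14 are all multiples of 1/33, consistent with 33 respondents. A quick sketch verifying this arithmetic (the per-option counts are inferred from the percentages, not reported in the paper):

```python
# Inferred per-option counts for Figure 14; dividing by 33 respondents
# reproduces the reported percentages. Counts are an inference, not data
# reported by the authors.
counts = {"Strongly agree": 1, "Agree": 3, "Neutral": 6,
          "Disagree": 7, "Strongly disagree": 16}
n = sum(counts.values())  # 33 respondents
percentages = {k: round(100 * v / n, 2) for k, v in counts.items()}
print(percentages)
# → {'Strongly agree': 3.03, 'Agree': 9.09, 'Neutral': 18.18,
#    'Disagree': 21.21, 'Strongly disagree': 48.48}
```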
drop in post-course evaluation scores (Tab. 1; ‘21). Post-course evaluations revealed significant issues with the design of the new MBL approach. Reflecting on the student feedback led to the establishment of a set of best practices that could improve the development and delivery of future MBL assessments. Redesigning the MBL assessment following these principles resulted in improved post-course evaluations during the 2022 and 2023 offerings of MD1 (Tab. 1; ’22-‘23).

The best practices used to improve the MBL approach for MD1 are briefly summarized below:

Best Practice 1: Each mastery skill should only evaluate one well-defined skill
It is recommended that skills requiring complex multistep solutions be broken into separate skills. For example, a vaguely