out-of-classroom assignments, including homework and large programming assignments (usually called projects). In the United States, the student learning assessment in such courses incorporates the assessment of every component of coursework, and the overall grade for the course is typically based on the weighted average of scores for labs, projects, homework, and tests (e.g., midterm and final exams, lab tests, quizzes)8. This approach is comprehensive and thus satisfying for instructors, since it allows a multi-aspect evaluation of student knowledge. It is satisfying for students as well, since it includes all portions of the student workload and averages the results throughout the term – a low score in one course component can be neutralized by a
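To make the weighted-average scheme concrete, here is a minimal sketch; the component names and weights are hypothetical, not drawn from any particular syllabus.

```python
# Illustrative weighted-average course grade (hypothetical weights).
component_weights = {
    "homework": 0.15,
    "labs": 0.15,
    "projects": 0.25,
    "quizzes": 0.10,
    "midterm": 0.15,
    "final": 0.20,
}

def course_grade(scores: dict[str, float]) -> float:
    """Return the overall grade (0-100) as the weighted average of
    component scores; a weak score in one component is diluted by
    strong scores elsewhere."""
    assert abs(sum(component_weights.values()) - 1.0) < 1e-9
    return sum(component_weights[c] * scores[c] for c in component_weights)

# Example: a low homework score is partially offset by other components.
print(course_grade({"homework": 55, "labs": 90, "projects": 88,
                    "quizzes": 92, "midterm": 85, "final": 80}))
```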
junior faculty observing senior. We have not yet heard any concerns regarding this. The Triangles are diverse by academic rank, again in large part because we are limited by scheduling in how they are formed. We hypothesize that, because of the program's formative aims, faculty rank does not influence the dynamics of the Triangle interactions; no such influence has been reported.

Conclusions

Two years ago, faculty in the Electrical Engineering and Computer Science Department at the Colorado School of Mines set out to develop a program of peer assessment of teaching with the objective of improving teaching in the department. While the assessment of the program is ongoing, faculty participation remains high and attitudes toward the program are positive. Because the program is
“alternate conceptions”).1 The topic has garnered considerable interest among engineering educators over the past few years, and several concept inventories on engineering-related topics are being developed, most notably by the group led by Evans associated with the Foundation Coalition.2 The goal of our project, funded by the Assessment of Student Achievement (ASA) program at NSF, is to develop and test an inventory for the thermal and transport sciences, based on the model of the Force Concept Inventory pioneered by Hestenes and colleagues.3 Once our CI has been developed and validated, it will be made available to interested engineering faculty for use as a classroom formative assessment tool that can provide valuable information for tracking student understanding
traditional role of teaching and administering a modest research program. At Trine University, a small private school in Angola, Indiana, Scott taught ten different courses, from introductory freshman courses to senior design, while serving as advisor to many undergraduate research projects. For the last four years, Scott has been at York College of Pennsylvania, where his concentration is on undergraduate education in mechanical engineering.

Dr. Tristan Martin Ericson, York College of Pennsylvania

Dr. Tristan Ericson is an assistant professor at York College of Pennsylvania. Prior to this appointment, he was a visiting professor at Bucknell University and received his PhD from Ohio State University in 2012. His research
this initial phase of the course, student teams are trying to decide what their projects will need to do in order to be successful. Typical activities include researching previously implemented solutions, asking basic questions, and finding information pertinent to their particular project. Therefore, a GSI must act as a mentor in this situation, because his/her past experience will play a large part in guiding the students. Team meetings at this point in the course serve to motivate the students and help them form a path for their project to follow. The GSI should offer advice on how to appropriately begin the project, act as a rudimentary customer, and prevent the teams from creating projects that are too large or too small for the
entire cohort has achieved the proper level of demonstration of an outcome, by performance level and percentage of overall grade. A large portion of this assessment process involved matching graded activities with specific ABET outcomes, weighting the importance of each activity toward demonstration of outcome accomplishment, and evaluating accomplishment based on grade percentages. A time-consuming but well-conceived upfront process yielded valuable program assessment results that could be compiled in a reasonable time frame.

The process, rubrics, data collected over two cycles, assessment of the results, and changes instituted are presented. The program results of the fall 2008 ABET visit will be presented as well, along with how the use of the senior design as
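As an illustration of the matching-and-weighting step described above, the sketch below aggregates activity grades into outcome-attainment fractions; the outcome labels, activity weights, and the 70% performance threshold are all assumptions for the example, not the program's actual ABET mapping.

```python
# Sketch of the activity-to-outcome aggregation described above; the
# outcome labels, activity weights, and threshold are illustrative only.
from statistics import mean

# Each outcome maps to (graded activity, weight) pairs; the weight is the
# judged importance of that activity toward demonstrating the outcome.
outcome_map = {
    "design": [("design_report", 0.6), ("final_demo", 0.4)],
    "teamwork": [("peer_eval", 0.5), ("milestone_reviews", 0.5)],
}

def outcome_score(grades: dict[str, float], outcome: str) -> float:
    """Weighted grade percentage one student earned toward an outcome."""
    return sum(w * grades[act] for act, w in outcome_map[outcome])

def cohort_attainment(cohort: list[dict[str, float]], outcome: str,
                      threshold: float = 70.0) -> float:
    """Fraction of the cohort demonstrating the outcome at or above the
    chosen performance level."""
    return mean(outcome_score(g, outcome) >= threshold for g in cohort)

# Example: two students' activity grades (percentages).
cohort = [{"design_report": 85, "final_demo": 78, "peer_eval": 90,
           "milestone_reviews": 88},
          {"design_report": 62, "final_demo": 70, "peer_eval": 75,
           "milestone_reviews": 65}]
print(cohort_attainment(cohort, "design"))  # -> 0.5
```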
Paper ID #37301

Work in Progress: Assessing Undergraduate Engineering Students’ Career Social Capital

Adrian Nat Gentry, Purdue University

Adrian Nat Gentry is a Ph.D. student at Purdue University in Engineering Education. They completed their undergraduate degree in Materials Engineering from Purdue in May 2020. Adrian’s research interests include assessing student supports in cooperative education programs and the experiences and needs of nonbinary scientists. Adrian is involved with Purdue’s Engineering Education Graduate Association and the oSTEM chapter at Purdue.

Dr. Eric Holloway, Purdue University at West Lafayette (COE
other informal co-curricular programs. Although it is imperative to evaluate these programs to better inform entrepreneurship education practices, minimal attention has been devoted to the assessment of entrepreneurship education programs. Furthermore, of the few existing studies, most have examined students’ perceptions of learning gains and affective responses such as entrepreneurial self-efficacy, mindset, and attitude. In this study, we present an examination of students’ actual learning in an entrepreneurship practicum course at a large research university. The course leverages the widely used Lean Launch Curriculum and Business Model Canvas (BMC) to engage students in entrepreneurship in a project-based learning environment. In contrast with prior work that
Education, 2008

A Structured Assessment Framework for Teamwork

Abstract

Anecdotal evidence from students shows that ACU undergraduates have difficulty managing their time due to various commitments and responsibilities outside the university. As such, this paper proposes a cooperative learning model that endeavors to help students use their time optimally in a first-year programming course in MATLAB. Included in this model is a structured assessment framework, as well as teamwork training to facilitate an effective teamwork strategy. This model also places emphasis on strong alignment of curriculum objectives to progressive assessment tasks.

To deploy this framework, a MATLAB programming project is designed to be just large enough for a group of 3
provided a high level of mentorship and direction.

Index Terms—ePortfolios, internship, experiential learning, assessment, ABET

I. INTRODUCTION

A. Experiential learning and assessment challenges

With the growth of engineering, technology, manufacturing (and many other areas) with industrialization, urbanization, modern warfare, and the needs of large populations, academic programs were designed to rapidly provide engineers in large numbers. Unfortunately, this reduced the opportunity for direct
linked directly to student career choice.14-15 Mamaril et al. recently validated an instrument to measure students’ self-efficacy in relation to engineering.16 The instrument is broken down into subscales to assess students’ beliefs about their general capabilities and about specific types of skill sets important to engineering (e.g., experimental skills, design skills). Each subscale is assessed with four or five Likert-style statements about which students rate their certainty. In this study, a unit operations laboratory course at a mid-sized private university was redesigned to incorporate project-based learning so as to encourage development of the skills and self-efficacy described above, as well as to increase student learning and engagement. In addition
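A minimal sketch of how such a subscale instrument can be scored follows; the item identifiers, subscale groupings, and 0-100 confidence scale are placeholders invented for illustration, not Mamaril et al.'s actual instrument.

```python
# Hypothetical scoring for a subscale instrument of the kind described:
# each subscale is the mean of its four or five Likert-style items.
subscales = {
    "experimental_skills": ["e1", "e2", "e3", "e4"],
    "design_skills": ["d1", "d2", "d3", "d4", "d5"],
}

def score_subscales(responses: dict[str, float]) -> dict[str, float]:
    """Average each subscale's item ratings into one score."""
    return {name: sum(responses[item] for item in items) / len(items)
            for name, items in subscales.items()}

print(score_subscales({"e1": 70, "e2": 80, "e3": 75, "e4": 85,
                       "d1": 60, "d2": 65, "d3": 70, "d4": 55, "d5": 60}))
```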
a software construction course. This course is a core requirement in a graduate program in software engineering at a large research university. While the body of research gives strong evidence that there are many benefits to implementing peer and self-assessment, concerns remain. Two concerns are that students will inflate their evaluations of themselves and that they may collude to give each other high ratings (“cronyism”). These concerns motivated this exploratory study of student bias in peer and self-assessment in a graduate engineering program. Our results confirm previous research that students tend to rate themselves higher than their peers, but we found no evidence of cronyism.

I. Introduction

Student assessment can be a complex task for an
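By way of illustration, the two concerns can be operationalized as below; the rating tuples and the "high score" cutoff are invented, and this is only a crude sketch, not the study's actual analysis.

```python
# Crude operationalization of the two concerns, on invented ratings;
# tuples are (rater, ratee, score on a 1-10 scale), and the cutoff of 8
# is an assumption.
from statistics import mean

ratings = [("a", "a", 9), ("a", "b", 7), ("b", "b", 8), ("b", "a", 7),
           ("c", "c", 6), ("c", "a", 8), ("a", "c", 8)]

def self_vs_peer_gap(ratings):
    """Mean self-rating minus mean peer rating; positive values suggest
    students rate themselves above their peers' view of them."""
    return (mean(s for r, e, s in ratings if r == e)
            - mean(s for r, e, s in ratings if r != e))

def reciprocal_high_pairs(ratings, high=8):
    """Pairs who rate each other >= high -- one crude cronyism flag."""
    given = {(r, e): s for r, e, s in ratings if r != e}
    return [(r, e) for (r, e), s in given.items()
            if r < e and s >= high and given.get((e, r), 0) >= high]

print(self_vs_peer_gap(ratings), reciprocal_high_pairs(ratings))
```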
year. The current Partnership Challenge involves a partnership with a local homeless shelter and education program; the planning for the challenge started in January 2009, with a challenge launch date of February 24, 2010. Other challenges, such as Rube Goldberg, can be executed in as little as three weeks, with the majority of the time being spent on refining the restrictions on the build.

Challenge Partners

In the two large-scale challenges (the Social Awareness Challenge and the Partnership Challenge), the instructional team looks for partners in either industry or the non-profit sector that will offer additional depth to the challenge. In many cases, the team has approached, or been approached by, a non-profit group that was looking for
to flexible deadlines to match what students may be expecting for their coursework.

We specifically investigate the impact of differing levels of leniency in different assignments (low-stakes content learning, and high-stakes programming projects and content assessments). Our goal is to determine the impact of these policies on student time management skills, academic performance, and overall stress levels. Additionally, the goal was that instructors would report a more positive teaching experience as a result of the policy.

Methods

This large course is a blended first-year engineering course that focuses on computer programming skills at a Midwestern doctoral-granting institution. In the Fall 2022 semester, there were 649 students, 20 teaching
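One way to make the differing leniency levels above concrete is a sketch like the following; the grace windows and penalty rates are invented for illustration and are not the course's actual policy.

```python
# Hypothetical encoding of a two-tier deadline leniency policy.
def late_penalty(hours_late: float, stakes: str) -> float:
    """Fractional credit retained on a late submission: low-stakes work
    gets a long grace window and a gentle per-day penalty; high-stakes
    projects and assessments get a short window and a steep penalty."""
    grace, rate = {"low": (48.0, 0.05), "high": (12.0, 0.20)}[stakes]
    if hours_late <= grace:
        return 1.0                        # within grace period: full credit
    days_over = (hours_late - grace) / 24.0
    return max(0.0, 1.0 - rate * days_over)

# Same lateness, different stakes: full credit vs. a 20% deduction.
print(late_penalty(36, "low"), late_penalty(36, "high"))
```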
Paper ID #42088

The Challenges of Assessing In-the-Moment Ethical Decision-Making

Ms. Tori N. Wagner, University of Connecticut

Tori Wagner is a doctoral student at the University of Connecticut studying Engineering Education. She has a background in secondary science education, playful learning, and digital game design.

Dr. Daniel D. Burkey, University of Connecticut

Daniel Burkey is the Associate Dean of Undergraduate Programs and the Castleman Term Professor in Engineering Innovation in the College of Engineering at the University of Connecticut. He earned his B.S. in Chemical Engineering from Lehigh University in
their readiness for self-directed learning. The students were given the SDLRS as a pre-test and post-test to determine whether the new courses enhanced their readiness for self-directed learning. These two new courses are briefly described, and the results of the assessment are presented.

Introduction

The ABET Engineering Criteria 2000 (EC2000) bring lifelong learning to the forefront for engineering educators. In the past, our role in lifelong learning was primarily offering courses and degree programs for practicing engineers through continuing education and on our campuses. Now EC2000 demands that we prepare engineering students to engage in lifelong learning. While this demand on faculty and curricula to prepare students for lifelong learning is new
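For concreteness, a pre/post comparison of the kind described might look like the following; the scores are fabricated for illustration, and the paired t-test is one common choice, not necessarily the analysis used in the paper.

```python
# Minimal pre/post readiness comparison on made-up scale totals.
from scipy.stats import ttest_rel

pre  = [190, 205, 178, 220, 199, 210]   # hypothetical pre-test totals
post = [201, 212, 185, 224, 205, 218]   # same students, post-test

gain = sum(b - a for a, b in zip(pre, post)) / len(pre)
stat, p = ttest_rel(post, pre)          # paired t-test across students
print(f"mean gain = {gain:.1f}, t = {stat:.2f}, p = {p:.3f}")
```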
has taught this course numerous times in a traditional format that uses lectures combined with active learning. While small improvements in course achievement have been seen due to minor changes (adding iClickers, adding “Gateway” quizzes), overall the failure rate in the author’s class has remained fairly steady over almost 20 years at about 22% (a failure is considered to be a grade of D+ or below because, at the author’s institution, that is the minimum grade needed to move to the next course).

In Summer 2018, the author attended the ASEE National Conference and saw Kurt DeGoede’s presentation on the implementation of competency-based assessment in an undergraduate dynamics course [1]. This method seemed ideal to help students
in 2019, with an implementation guide the following year. CS teacher endorsement standards are also being developed. Dr. Weese has developed, organized, and led activities for several outreach programs for K-12, impacting well more than 4,000 students.

©American Society for Engineering Education, 2024

Developing an Instrument for Assessing Self-Efficacy Confidence in Data Science

Safia Malallah, Kansas State University, safia@ksu.edu
Ejiro Osiobe, Baker University, Jiji.osiobe@bakerU.edu
Zahraa Marafie, Kuwait University, Zahraa.Marafie@ku.edu.kw
Patricia Coronel, ULEAM, patricia.henriquez@uleam.edu.ec
Lior Shamir, Kansas State
specified location at a specified time. Beginning in the fall of 1997, SPSU began offering its Master of Science with a major in Quality Assurance on the Internet. As a pilot program within the University System of Georgia, the process was viewed as an opportunity to evaluate and assess the Internet as a medium for delivering a complete degree program via distance learning. The following paper discusses a variety of issues, including program administration, curriculum development, and initial student/faculty reaction.

INTRODUCTION

The role of teaching is the transfer of knowledge, and to date the most common form of learning has been apprenticeship. This one-on-one training is too labor intensive for the
students to be successful in their first design projects. At the current time, many engineering programs provide an introduction or overview to engineering design early in the student's academic career. This introduction, to be meaningful, often includes an initial exposure to engineering design through a small design project. Students, however, are generally not prepared to develop solution concepts “from scratch,” nor are they prepared to document and describe their solutions in precise engineering terms. This is where exposure to and use of patent information can have significant impact.

Patents, by their very nature, provide conceptual descriptions of solutions, accompanied by annotated conceptual drawings. In design courses at the introductory
(team size, number of faculty advisors, number of graduate students involved, etc.) and also includes the average hours per week the students committed to the design project. Overall, the design teams varied in size: there were teams as small as four and as large as twenty-seven. The number of faculty advisors also varied, from one to three per team. In at least one-quarter of the teams, graduate students were involved as well. Moreover, the time spent on the project per week varied from two to thirty-two hours, and the average time students spent on their design project was about 10.7 hours/week. Survey results also show that they wished to spend more time on their project – 2.6 hours more per week on average (equivalent
program by a former student. The purpose of the survey was to determine what had happened to our graduates after leaving the program, and it was focused on graduates who had experienced the original curriculum. A large amount of information was gathered from the 56 surveys returned out of the 183 sent out. The distribution of the 56 respondents by year of graduation is shown in Table 2. The skewing of the response distribution toward more recent graduates reflects both our greater ability to track down recent graduates and the relatively low number of graduates in the early years of the program. Responses to specific questions relating to career value were extracted for this study
She joined the research team in December of 2015 and is currently working on assessing motivation in academia.

©American Society for Engineering Education, 2018

Providing Student and Faculty Feedback from Motivation Assessments in Capstone Courses

Abstract

Student motivation in capstone design courses is assessed in six capstone project courses at six diverse institutions in the 2017-2018 academic year. This assessment follows a similar assessment study at a large public university in six unique capstone courses. Reliability and validity analysis during the first year contributed to upgrades to the assessment tools currently being implemented. Qualitative feedback from student and
provides a useful calibration point for individual contributions.

Characteristics of the Program

The program includes the following characteristics:

- The program is situated at a private research university.
- All projects are approached in an authentic “clinical” real-world fashion.
- A single-semester multidisciplinary capstone involving electrical, mechanical, computer systems, and industrial engineering students, with a common syllabus across all participating departments. A small percentage (less than 5%) of aerospace, biomedical, and materials engineering students also participate and also
2006-1515: BUILDING AND ASSESSING CAPACITY IN ENGINEERING EDUCATION RESEARCH: THE BOOTSTRAPPING MODEL

Josh Tenenberg, University of Washington-Tacoma

Josh Tenenberg is an Associate Professor in the Computing and Software Systems program in the Institute of Technology at the University of Washington, Tacoma. He holds a B.M. in music performance (San Francisco State University, U.S.A.) and an M.S. and Ph.D. in Computer Science (University of Rochester, U.S.A.), where his primary research was in Artificial Intelligence. His research areas have included automated planning, knowledge representation and reasoning, reinforcement learning, temporal logic, and cognitive modeling of computer
, each programming team member performed a peer evaluation of the other team members, plus a self-evaluation, using the “Peer Evaluation: Teamwork and Effective Collaboration Rubric.” While not formatively used here, this rubric can work as a formative assessment tool if implemented at the conclusion of each of several small-scale projects or periodically during the progress of a larger-scale project. An example use of this rubric is provided in Appendix B.

Following submission of the software application and accompanying report, two summative rubrics were applied. The “Client: Program Evaluation Rubric” addressed the extent to which the programming teams satisfactorily addressed the learning outcomes, audience, comprehension, visualization, and usability
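If the rubric were reused formatively across project checkpoints as suggested above, per-checkpoint tallies could be kept with a sketch like this; the criterion name and 1-4 scale are assumptions, not the rubric's actual rows.

```python
# Sketch of tallying peer-evaluation rubric scores across checkpoints so
# trends are visible while a larger-scale project is still in progress.
from collections import defaultdict
from statistics import mean

def formative_summary(evals):
    """evals: list of (checkpoint, ratee, criterion, score 1-4).
    Returns the average score per (checkpoint, ratee, criterion)."""
    buckets = defaultdict(list)
    for checkpoint, ratee, criterion, score in evals:
        buckets[(checkpoint, ratee, criterion)].append(score)
    return {key: mean(scores) for key, scores in buckets.items()}

# Example: one team member's collaboration scores improving over time.
evals = [(1, "kim", "collaboration", 3), (1, "kim", "collaboration", 4),
         (2, "kim", "collaboration", 4)]
print(formative_summary(evals))
```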
communication) to be successful [17].

Student Engagement

Student engagement refers to the degree of interest and attention shown in course activities. Student engagement can be a predictor of course completion and retention rates [18]. Active learning techniques such as think-pair-share exercises [19], pair programming [20], peer instruction [21], and flipped classrooms [22] have been demonstrated to increase student engagement [11]. Many of these interventions are used for introductory-level instruction, primarily to address broadening participation in large classes [23].

In software engineering courses, the use of real-world, community-based projects may be an effective way to engage students with meaningful problem solving while teaching them
transitioning traditional lecture courses into classrooms where active learning takes place. Additional interests include bridging the gap between physics courses (taught by a physicist) and engineering courses (taught by an engineer).

Dr. Amber Harrington, Arkansas Tech University

Dr. Amber Harrington is an Assistant Professor of Physics in the Physical Sciences Department at Arkansas Tech University.

©American Society for Engineering Education, 2019

Assessing ABET ANSAC and EAC Learning Outcome (2) in Introductory Physics

Abstract

The physics and engineering physics programs at Arkansas Tech University (ATU) are currently in the process of preparing to apply for ABET
approaches to provide non-trivial classification of large data sets. His main teaching interests are crystal plasticity, statistical mechanics, gas dynamics and kinetic theory, numerical methods in engineering, thermodynamics, solid mechanics, and mechanics of materials. He is also interested in developing online courses and using online tools for facilitating active learning techniques in engineering classrooms.

©American Society for Engineering Education, 2020

E-Learning and Assessment in the Cloud: Engineering Courses

S. Papanikolaou1,2
1 Department of Mechanical & Aerospace Engineering, West Virginia University
2 Department of Physics, West
the frequency of the examinations requires more work by the instructor in writing and grading examinations. This is especially true for classes having large enrollments. In the last few semesters, we have tried new ways of assigning homework problems and assessing student knowledge in our introductory thermodynamics course. Our experience includes large classes with enrollment exceeding 120 students. This paper describes our experiences in teaching an introductory thermodynamics course and their effect on student learning outcomes. Students were surveyed in recent semesters to get their feedback on the methods used in teaching the course and the assessment of student knowledge. This paper provides a summary of the survey results.

Introduction

Mechanical