, without having to wait until all students’ work has been graded. Indeed, peer assessment is one of the few scalable approaches to assessment: as the amount of work to assess increases, the resources available for assessment increase proportionally.

Perhaps the most frequent use of peer assessment is for teaching writing. Writing for an audience of their peers forces students to explain themselves well enough to be understood by non-experts. It also gives them the benefit of seeing and responding to their peers’ reactions to what they write.

Writing is important in engineering, of course. It is a good way for students to grapple with ethical issues that arise in their professional development [5, 6
group grade to produce a final grade (a minimal weighting sketch appears after this excerpt). Note that all of these approaches assume that peer assessment is also performed. In principle, staff assessment could be substituted for peer assessment, but (1) this would consume much more staff time, and (2) students would miss out on the metacognitive benefits of evaluating others’ work. It is true, however, that efficiently processing peer assessments requires significant IT support (see Babik et al. [27] for a discussion of the options).

Table 1 shows how the four approaches compare. CPR (and the similar training program used by Coursera) contrasts with the other three approaches because (i) it is used to assess artifacts (writing, reporting, etc.) rather than student contributions to a team, and because it
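As one illustration of how a peer-assessment factor might be combined with a group grade, here is a minimal sketch of an autorating-style weighting scheme. The function name, the 0–100 scale, and the specific formula are assumptions for illustration, not a description of any of the four surveyed approaches:

```python
def individual_grade(group_grade, member_ratings, team_ratings):
    """Adjust a shared group grade by peer assessment (illustrative only).

    Each member's weighting factor is their mean peer rating divided by
    the mean rating across the whole team, so above-average contributors
    earn more than the group grade and below-average contributors less.
    """
    factor = (sum(member_ratings) / len(member_ratings)) / (
        sum(team_ratings) / len(team_ratings)
    )
    return min(group_grade * factor, 100.0)  # cap at full marks
```

For example, under this scheme a member whose mean peer rating is 4.5 on a team whose overall mean rating is 4.0 would receive 1.125 times the group grade, capped at 100.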
using the Fink Model of Backwards Design,10 we focused on helping faculty to think differently about course design and instruction by going to the end of instruction, setting outcomes, and working backwards to design the course. This faculty development workshop also included a social learning component: working with other faculty in a learning community,21 where they learned new content and strategies, observed demonstrations of new strategies, and then integrated what they learned, taught a brief excerpt of a lesson to their peers, and received feedback from the community of learners. An instrument called the Concerns-Based Adoption Model (CBAM)22,23 was also used as an assessment tool for this workshop, to measure how workshop participants
intentional investment over the summer to orient and prepare new faculty members prior to their first instructional class with students. This strategy of integrating new faculty into the institution and of developing a classroom training environment has paid dividends, with instructors having greater success during their first semester of teaching. New faculty members are given the opportunity to understand their role in the larger institutional outcomes, to learn best practices and techniques, and to practice teaching with their peers and mentors, allowing for refinement before their first class. The department’s faculty development strategy has been recognized by the Dean and shared with other departments as an exemplary approach to preparing faculty to teach. Written
submit evidence of work in these areas. During the review, the candidate presents a portfolio with evidence of their work, intended to tell the professional story of the candidate while on the tenure track. While each candidate tailors his or her portfolio to the institutional emphases across the performance categories, there are some common artifacts1:

 Teaching
  o Preliminary narrative
  o Summary of teaching responsibilities
  o Samples of syllabi
  o Student evaluations
  o Peer evaluation of teaching
  o Examples of graded student work
  o Examples of experimentation and improvement in the classroom
 Research/Scholarship
  o A complete list of journal
a problem involving (for example) the illustration of a circuit and/or its mathematical expression. With the minute paper, students were asked at the end of class to write down their muddiest points, main takeaways, and/or questions based upon their lecture notes. To directly assess the effectiveness of this new approach, current rubric-derived exam results were compared with previous exam results, taking GPA into account. We obtained significantly higher final exam scores during the active semester. Semi-structured student interviews were also conducted before class sessions and content-analyzed by two analysts to indirectly assess the impact of the techniques on student learning. Based on the interview data, the vast majority of students found the
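The GPA-adjusted comparison described above might take a form like the following. This is a sketch only; the data file, column names, and the choice of an OLS/ANCOVA model are assumptions, not necessarily the authors’ analysis:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical per-student records: semester type (active vs. previous),
# incoming GPA, and final exam score. All names are illustrative.
df = pd.read_csv("exam_results.csv")  # columns: semester, gpa, exam_score

# ANCOVA-style model: does semester type predict final exam score
# after controlling for GPA?
model = smf.ols("exam_score ~ gpa + C(semester)", data=df).fit()
print(model.summary())  # inspect the coefficient on C(semester)
```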
graduate education, online engineering cognition and learning, and engineering communication.

Dr. Katy Luchini-Colbry, Michigan State University

Katy Luchini-Colbry is the Director for Graduate Initiatives at the College of Engineering at Michigan State University, where she completed degrees in political theory and computer science. A recipient of an NSF Graduate Research Fellowship, she earned her Ph.D. and M.S.E. in computer science and engineering from the University of Michigan. She has published more than two dozen peer-reviewed works related to her interests in educational technology and enhancing undergraduate education through hands-on learning. Luchini-Colbry is also the Director of the Engineering Futures
Engineering Education has been specifically defined and labeled as a discipline [e.g. 6, 7], it is reasonable to apply the general conceptual model to this special case. Therefore, in the discipline of Engineering Education:

 Practitioners are classroom instructors, many of whom are also researchers in another engineering discipline. High-level practitioners seek to effectively incorporate teaching and learning initiatives supported by the literature of the Engineering Education discipline.
 Researchers are scholars conducting rigorous, scientific studies in response to engineering education questions and submitting the questions, methods, and results to peer review [8].
 Trainers are the engineering
learning, which they can use to make adjustments to their teaching. One definition of formative assessment is offered by Black and Wiliam (2009):

    Practice in a classroom is formative to the extent that evidence about student achievement is elicited, interpreted, and used by teachers, learners, or their peers, to make decisions about the next steps in instruction that are likely to be better, or better founded, than the decisions they would have taken in the absence of the evidence that was elicited. (p. 7)

However, there are multiple viewpoints on the methods by which this evidence should be elicited. One view interprets formative assessment as a formal diagnostic test that produces a score quantifying student achievement
visiting assistant professor at a research-one land-grant university, he forecasted that he would be introduced to many of the same hurdles as proposed by Brent and Felder (1998):

    Writing proposals and trying to get them funded, attracting and learning how to deal with graduate students, and having to churn out a large number of refereed papers while you were still trying to figure out how to do research. You may remember the incredibly time consuming labor of planning and teaching new courses and the headaches of dealing with bored classes and poor student performance and possibly cheating and poor ratings and a host of other problems you never thought about when
. This could be achieved by showing graders how the grades they assign align with those of their peer graders (in terms of average and distribution), which tends to influence more extreme graders to become more moderate.25 Alternatively, calibration rounds can be used to establish complex formulas to adjust for different tendencies4 (a minimal sketch of such an adjustment follows this excerpt).

Methods

Context and data collection. This study investigated grading in the second of a two-semester, first-year engineering course sequence that is required for all engineering undergraduates at a large Midwestern university. The course employs standards-based grading using a set of 19 major learning objectives, each with a set of minor learning outcomes, collectively accounting for 88 total learning outcomes. The course was offered
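Here is the adjustment sketch promised above. It assumes a simple z-score rescaling toward the pooled score distribution; this is a stand-in for, not a reproduction of, the calibration formulas in the cited work, and all names are illustrative:

```python
import statistics
from collections import defaultdict

def calibrate(scores):
    """Rescale peer-assigned scores to offset grader tendency.

    `scores` is a list of (grader_id, submission_id, score) tuples.
    Each score is re-expressed as a z-score within its grader's own
    distribution, then mapped onto the pooled mean and spread, so a
    harsh grader's 70 and a lenient grader's 85 can land on the same
    adjusted value. Illustrative only.
    """
    by_grader = defaultdict(list)
    for grader, _, score in scores:
        by_grader[grader].append(score)

    all_scores = [s for _, _, s in scores]
    pooled_mean = statistics.mean(all_scores)
    pooled_sd = statistics.pstdev(all_scores) or 1.0

    adjusted = []
    for grader, submission, score in scores:
        g_mean = statistics.mean(by_grader[grader])
        g_sd = statistics.pstdev(by_grader[grader]) or pooled_sd
        z = (score - g_mean) / g_sd          # grader-relative position
        adjusted.append((grader, submission, pooled_mean + z * pooled_sd))
    return adjusted
```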