
Evaluating Methods To Improve Teaching In Engineering


Conference: 2006 Annual Conference & Exposition

Location: Chicago, Illinois

Publication Date: June 18, 2006

Start Date: June 18, 2006

End Date: June 21, 2006

ISSN: 2153-5965

Conference Session: Faculty Development

Tagged Division: Educational Research and Methods

Page Count: 20

Page Numbers: 11.602.1 - 11.602.20

DOI: 10.18260/1-2--504

Permanent URL: https://peer.asee.org/504


Paper Authors

Cynthia Finelli, University of Michigan (orcid.org/0000-0001-9148-1492)

Dr. Cynthia J. Finelli (cfinelli@umich.edu) is Managing Director of the Center for Research on Learning and Teaching (CRLT) North and Associate Research Scientist of Engineering Education at the University of Michigan (U-M). Her current research interests include evaluating methods to improve teaching, exploring ethical decision-making in engineering, developing a tool for comprehensive assessment of team-member effectiveness, and assessing the effect of the first-year experience on under-represented student retention. She serves on the Executive Board of the Educational Research and Methods Division (ERM) of ASEE and was the ERM Division Program Co-Chair for the 2003 Frontiers in Education Conference and the 2006 ASEE Annual Conference and Exposition.

Amy Gottfried, University of Michigan

Dr. Amy C. Gottfried is the Chemical Sciences at the Interface of Education Postdoctoral Fellow in the Department of Chemistry at U-M as well as an instructional consultant for CRLT. She has led the development and implementation of a studio general chemistry course. Her research interests include understanding the manner in which students learn chemical concepts and preparing instructors at all levels of chemistry education.

Matthew Kaplan, University of Michigan

Dr. Matthew L. Kaplan is the Associate Director of CRLT at U-M where he focuses on external and university-wide initiatives. He has served on the executive board of the Professional and Organizational Development Network in Higher Education (POD) and has edited two volumes of the POD journal To Improve the Academy. He has written on issues of teaching evaluation, multiculturalism, and the use of interactive theatre for faculty development.

Vilma Mesa, University of Michigan

Dr. Vilma M. Mesa is Assistant Professor and Assistant Research Scientist of Mathematics Education at U-M’s School of Education. Her research interests include undergraduate mathematics teaching, curriculum theory and evaluation in mathematics, and international comparisons of achievement.

Christopher O'Neal, University of Michigan

Dr. Christopher M. O’Neal is Coordinator of STEM Faculty Development at CRLT at U-M. His Ph.D. is in Ecology, and his current research interests include the impact of early evaluation on teacher performance, the value of interactive theatre for building multicultural competencies in educators, and the impact of TAs on student retention in the sciences and engineering.

Mary Piontek, University of Michigan

Dr. Mary E. Piontek is Assistant Research Scientist at CRLT at U-M where she works with individual faculty, departments/units, and schools/colleges that need assistance designing program evaluation and assessing the effectiveness of initiatives to improve teaching and learning. Her research and evaluation techniques capture the local context of an organization or program through individual and focus group interviews, in-depth participant observation, document and archival analysis, survey research, and qualitative/quantitative mixed designs. Her research interests include the changing roles of evaluators and their client/stakeholder relationships.


Abstract
NOTE: The first page of text has been automatically extracted and included below in lieu of an abstract

Evaluating Methods to Improve Teaching in Engineering

Engineering faculty at a large research institution participated in a project for evaluating methods to improve teaching. Faculty were randomly assigned to one of four separate cohorts (each receiving a different type of feedback designed to improve teaching), and comparative data were collected on each of the four methods. Faculty in Cohort 0: Control served as the control population and did not receive formal feedback of any kind to improve teaching. Faculty in Cohort 1: Ratings Report received a report summarizing student ratings of teaching at midterm. For faculty in Cohort 2: Feedback and Consult, an instructional consultant facilitated a student feedback session at midterm (also known as a small group instructional diagnosis) and then conducted a follow-up consultation with the faculty member. For faculty in Cohort 3: Videotape and Consult, an instructional consultant videotaped a class period and then conducted a follow-up consultation.

To compare the four methods to improve teaching, data from three separate sources were analyzed. First, student ratings of teaching were collected in the middle of the academic term and again at the end of the term, and the change in average ratings from the middle to the end of the term was compared across cohorts to assess the level of teaching improvement. Second, all faculty completed an online survey to assess the method to improve teaching that they had completed, to rate their own teaching at the end of the term, and to describe their perceptions of the project. Faculty responses were analyzed and compared by cohort. Finally, a focus group for the instructional consultants was conducted to gauge their perceptions of each method, to ascertain the nature of the consultations, and to identify the kinds of issues that arose in each consultation.
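The central comparison in the first data source, the change in average student ratings from midterm to end of term, aggregated by cohort, can be sketched in a few lines. The sketch below is purely illustrative: the rating values are hypothetical and do not come from the study.

```python
from statistics import mean

# Hypothetical (midterm, end-of-term) average ratings on a 1-5 scale,
# one pair per participating faculty member, keyed by cohort.
# These numbers are invented for illustration only.
ratings = {
    "Cohort 0: Control":               [(3.8, 3.9), (4.1, 4.0)],
    "Cohort 1: Ratings Report":        [(3.6, 3.8), (4.0, 4.1)],
    "Cohort 2: Feedback and Consult":  [(3.5, 4.0), (3.9, 4.3)],
    "Cohort 3: Videotape and Consult": [(3.7, 4.0), (4.0, 4.2)],
}

def mean_change(pairs):
    """Average change from midterm to end-of-term rating."""
    return mean(end - mid for mid, end in pairs)

for cohort, pairs in ratings.items():
    print(f"{cohort}: {mean_change(pairs):+.2f}")
```

A larger positive change for a cohort would, on this simplified view, suggest that its feedback method had a stronger effect on student ratings, though the paper's actual analysis would also need to account for sample size and variability.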

From this limited study, it appears that the student feedback session with follow-up consultation may have the most positive impact on student ratings of teaching. However, having a class session videotaped followed by a consultation is also a promising method to improve teaching. Further work to study these methods more rigorously is underway.

1. Experimental Design

Faculty teaching full-term, undergraduate, lecture courses in all engineering departments were invited to participate in the project. Those who participated were asked to follow a specific protocol for gathering feedback to improve teaching. Then, to evaluate teaching improvement, data were collected and analyzed from three separate sources. Both the protocols for gathering feedback to improve teaching and the methods for evaluating teaching improvement are described in this section.

1.1. Methods to improve teaching

After faculty recruiting was complete, participants were randomly assigned to one of four cohorts (Cohort 0: Control; Cohort 1: Ratings Report; Cohort 2: Feedback and Consult; and Cohort 3: Videotape and Consult). Depending on their cohort assignment, faculty were asked to follow a specific protocol for gathering feedback to improve teaching (described in Table 1).
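The random assignment step described above can be sketched as follows. This is an illustrative reconstruction, not the study's actual procedure; the function name and participant identifiers are hypothetical.

```python
import random

COHORTS = [
    "Cohort 0: Control",
    "Cohort 1: Ratings Report",
    "Cohort 2: Feedback and Consult",
    "Cohort 3: Videotape and Consult",
]

def assign_cohorts(participants, seed=None):
    """Shuffle participants, then deal them round-robin into the four
    cohorts so that group sizes stay as equal as possible."""
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    return {p: COHORTS[i % len(COHORTS)] for i, p in enumerate(shuffled)}
```

Shuffling before a round-robin deal gives each faculty member an equal chance of landing in any cohort while keeping cohort sizes balanced, which supports the between-cohort comparisons made later in the study.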

Finelli, C., & Gottfried, A., & Kaplan, M., & Mesa, V., & O'Neal, C., & Piontek, M. (2006, June), Evaluating Methods To Improve Teaching In Engineering Paper presented at 2006 Annual Conference & Exposition, Chicago, Illinois. 10.18260/1-2--504

ASEE holds the copyright on this document. It may be read by the public free of charge. Authors may archive their work on personal websites or in institutional repositories with the following citation: © 2006 American Society for Engineering Education. Other scholars may excerpt or quote from these materials with the same citation. When excerpting or quoting from Conference Proceedings, authors should, in addition to noting the ASEE copyright, list all the original authors and their institutions and name the host city of the conference. - Last updated April 1, 2015