Salt Lake City, Utah
June 20-23, 2004
ISSN: 2153-5965
Pages: 9.1045.1 - 9.1045.22 (22 pages)
DOI: 10.18260/1-2--13671
https://peer.asee.org/13671
Session 1331
Reducing the Workload in Your Class Won’t “Buy” You Better Teaching Evaluation Scores: Re-Refutation of a Persistent Myth
Kay C Dee
Department of Biomedical Engineering, Tulane University, New Orleans, LA 70118
Abstract

Although much of the educational literature characterizes the relationship between course workload and teaching evaluation scores as small (with higher-workload/difficulty courses rated slightly more favorably), many faculty believe, and some reports claim, that students “reward” instructors of low-workload courses with good teaching evaluation scores. This study therefore examined whether engineering students’ perceptions of course workload were related to perceived instructor performance. Course-averaged student evaluations of teaching for each class offered through the Tulane University School of Engineering from Fall 1997 to Fall 2002 (878 courses) were collected. Evaluations contained seventeen items rated on five-point scales similar to Likert scales, including an item regarding the instructor’s overall performance and an item regarding the amount of work required for the course. Pearson’s and Spearman’s correlation coefficients and two-tailed significance levels were calculated for evaluation items, the number of respondents, and the general course level. Subsequent model adequacy checking revealed that the course evaluation data violated major assumptions inherent in the use of parametric statistical methods (e.g., Pearson’s correlations, t tests). Therefore, statistical outliers were removed; data transformations (natural log, standardized scores) were applied to equalize variance and normalize the distributions of scores within items; and correlation coefficients were re-calculated and compared for each data transformation. Over all analyses performed, the largest correlation between the evaluation items regarding the amount of coursework and instructor performance was only 0.15, such that courses requiring more work received poorer evaluations. However, curve fitting revealed essentially no easily-generalized relationship between these two items (R2 = 0.01 and 0.04 for linear and quadratic curve fits). In contrast, scores on other items, such as “the instructor gave organized lectures,” were highly correlated (coefficients between 0.80 and 0.92) with overall instructor performance scores. A number of strong inter-item correlations suggested, and factor analysis confirmed, that the evaluation form used at Tulane fundamentally assessed two distinct factors, apparently “instructor performance” and “amount of work,” which together accounted for 70% of the variance across items. Although correlation does not imply causation, this study re-confirms that engineering faculty seeking improved teaching evaluations should focus on improving instructional practices associated with content organization and delivery, or with instructor-student interactions, instead of worrying about the potential effects of perceived course workloads.
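For readers who wish to run this style of analysis on their own evaluation data, the sketch below illustrates the general workflow the abstract describes: raw Pearson’s and Spearman’s correlations with two-tailed p-values, re-computation after natural-log and standardized-score transformations, and linear/quadratic curve fits summarized by R2. It is a minimal illustration only, using synthetic data and hypothetical item names (“workload”, “overall_performance”); it is not the author’s code and does not reproduce the Tulane dataset or its seventeen evaluation items.

# Minimal sketch (assumptions noted above): synthetic data, hypothetical item names.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical course-averaged scores for 878 courses on 5-point scales.
df = pd.DataFrame({
    "workload": rng.uniform(1.0, 5.0, 878),
    "overall_performance": rng.uniform(1.0, 5.0, 878),
})

# Pearson's and Spearman's correlations with two-tailed p-values (raw scores).
r_p, p_p = stats.pearsonr(df["workload"], df["overall_performance"])
r_s, p_s = stats.spearmanr(df["workload"], df["overall_performance"])
print(f"raw: Pearson r = {r_p:.2f} (p = {p_p:.3f}), Spearman rho = {r_s:.2f} (p = {p_s:.3f})")

# Transformations analogous to those named in the abstract (natural log,
# standardized scores), followed by re-computed correlations.
log_scores = np.log(df)
z_scores = (df - df.mean()) / df.std(ddof=1)
for label, data in (("log", log_scores), ("z", z_scores)):
    r, p = stats.pearsonr(data["workload"], data["overall_performance"])
    print(f"{label}: Pearson r = {r:.2f} (p = {p:.3f})")

# Linear and quadratic curve fits; R^2 indicates how much variance in the
# overall-performance item is explained by the workload item.
x = df["workload"].to_numpy()
y = df["overall_performance"].to_numpy()
for degree in (1, 2):
    fitted = np.polyval(np.polyfit(x, y, degree), x)
    r_squared = 1.0 - np.sum((y - fitted) ** 2) / np.sum((y - y.mean()) ** 2)
    print(f"degree {degree} fit: R^2 = {r_squared:.3f}")

With synthetic uniform data the printed correlations and R2 values will be near zero; the point of the sketch is the sequence of steps, not the numbers.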
Proceedings of the 2004 American Society for Engineering Education Annual Conference & Exposition Copyright © 2004, American Society for Engineering Education
Dee, K. C. (2004, June), Reducing the Workload in Your Class Won’t “Buy” You Better Teaching Evaluation Scores: Re-Refutation of a Persistent Myth. Paper presented at the 2004 Annual Conference, Salt Lake City, Utah. 10.18260/1-2--13671
ASEE holds the copyright on this document. It may be read by the public free of charge. Authors may archive their work on personal websites or in institutional repositories with the following citation: © 2004 American Society for Engineering Education. Other scholars may excerpt or quote from these materials with the same citation. When excerpting or quoting from Conference Proceedings, authors should, in addition to noting the ASEE copyright, list all the original authors and their institutions and name the host city of the conference. - Last updated April 1, 2015