Evaluating the Teaching Evaluations of 100 North American Schools

Conference

2020 ASEE Virtual Annual Conference Content Access

Location

Virtual Online

Publication Date

June 22, 2020

Start Date

June 22, 2020

End Date

June 26, 2020

Conference Session

WIP-ing Up Faculty Development!

Tagged Division

Faculty Development Division

Page Count

5

DOI

10.18260/1-2--34601

Permanent URL

https://peer.asee.org/34601

Download Count

419

Paper Authors

Haroon Malik Marshall University

Dr. Malik is an Associate Professor at the Department of Computer Sciences and Electrical Engineering, Marshall University, WV, USA.

Wael A. Zatar Marshall University

Dr. Zatar serves as the Dean of the College of Information Technology and Engineering at Marshall University.

Abstract

This paper is a Work-in-Progress (WIP).

Motivation - Assessing the teaching quality of an instructor is an important but difficult and subjective task. The most widely used tool for evaluating an instructor's performance in a course is a questionnaire that collects numerical responses about the instructor and the course, along with comments in free-form text. The instructor uses this feedback for continuous self-improvement. The instructor's department uses the evaluations in tenure and promotion review, in peer mentoring, and as part of its own ongoing self-management. Because institutions of higher learning use evaluations to determine whether faculty keep their jobs or receive raises, the reliability of these evaluations matters. A fundamental issue is that (a) the questions provided in course evaluations are constrained by what the department/institution chooses to evaluate rather than by what students feel is important, and (b) with few exceptions, this feedback is kept confidential and is shared with neither current nor prospective students.

State-of-the-Art - With the emergence of Web 4.0, many sites have evolved that encourage students to share their in-class experiences and opinions about their professors. Among these sites, by far the most popular is RateMyProfessors (RMP), where students can anonymously rate different aspects of their professors (i.e., clarity, helpfulness, easiness) and also provide open-ended comments. The site currently hosts approximately 15 million evaluations of 1.4 million professors from 7,000 schools in the United States, Canada, and the United Kingdom. Several studies demonstrate that students appear to have confidence in RMP ratings and use the site to make academic decisions. Despite this importance, the existing body of research analyzing RMP data has focused only on the quantitative scores or has considered only professors' reviews.

Paper Contribution - The paper initiates a new research approach that combines the verbal reviews, as a means of qualitative analysis, with the quantitative scores to (a) identify factors that affect students' interest in their universities, (b) qualitatively and quantitatively recognize students' concerns/complaints about engineering instructors, (c) extract the topics students discuss when rating their North American engineering professors, (d) identify and quantify gender bias in teaching evaluations, and (e) reliably distinguish good professors from poor ones.

Methodology - The authors extracted one hundred and twenty thousand student reviews, spanning five years, for one hundred engineering schools across North America (20 in Canada, 5 in Mexico, and 75 in the USA). A Feature Concept Analytics (FCA) technique is used to extract, as a tree structure, the qualities/features of engineering instructors that students most often discuss in free-form text. The tree structure facilitates comparing one instructor with others. A sentiment analysis technique is used to explore students' opinions about those features, i.e., instructor qualities, while gender bias in teaching evaluations is quantified by applying the Weighted-Tree-Similarity algorithm to the extracted features and the corresponding student sentiments. Latent Dirichlet Allocation (LDA), a probabilistic technique, is used to identify the topics students discuss when rating engineering professors.

Preliminary Results - The preliminary results show that the proposed methodology can identify students' complaints across seventy topics with 85% precision and 60% recall. The methodology reliably distinguishes good professors from poor ones with an accuracy of over 92%. The authors plan to present the paper/results in a lightning talk.

Malik, H., & Zatar, W. A. (2020, June), Evaluating the Teaching Evaluations of 100 North American Schools. Paper presented at 2020 ASEE Virtual Annual Conference Content Access, Virtual Online. 10.18260/1-2--34601

ASEE holds the copyright on this document. It may be read by the public free of charge. Authors may archive their work on personal websites or in institutional repositories with the following citation: © 2020 American Society for Engineering Education. Other scholars may excerpt or quote from these materials with the same citation. When excerpting or quoting from Conference Proceedings, authors should, in addition to noting the ASEE copyright, list all the original authors and their institutions and name the host city of the conference. - Last updated April 1, 2015