Session 2532
A Criteria-Based Course and Instructor Evaluation System
David G. Meyer School of Electrical & Computer Engineering/Purdue University
ABSTRACT
This paper describes a criteria-based course and instructor evaluation system recently deployed by the School of Electrical & Computer Engineering at Purdue University. The various evaluation forms are described, along with the criteria used to evaluate both lecture- and lab-oriented courses. The software used to analyze the scannable forms and the variety of report formats generated are also described.
INTRODUCTION
In the early 1970's, the School of Electrical & Computer Engineering (ECE) at Purdue University adopted a course & instructor evaluation system to be used in all courses (undergraduate and graduate, lecture and laboratory classes). The evaluation system adopted was based on a series of questions that students could respond to using a five-point scale, with answers ranging from "strongly agree" to "strongly disagree" (the Purdue Center for Instructional Services has compiled a large set of such questions, referred to as the CAFETERIA System, from which "customized" course & instructor evaluation forms can be constructed). For its course & instructor evaluation forms, ECE chose a set of ten questions (eleven for the "lab" version of the form); this same set of questions was used for over twenty years. The analysis performed was fairly straightforward: the mean of each question was computed on a five-point scale (with "5" = "strongly agree" and "1" = "strongly disagree"), and from equally weighted arithmetic averages of several of these means, three composite scores were computed: (1) an "instructor" score, (2) a "course" score, and (3) a "facilities" score. Associated with each composite score was a "percentile". A sample of the CAFETERIA-style form used is illustrated in Figure 1, and a sample of the report output produced appears in Figure 2.
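The arithmetic just described can be sketched as follows. This is only an illustration: the grouping of questions into the "instructor" and "course" composites shown here is hypothetical (the actual mapping used by ECE is not given at this point in the paper), and the sample responses are invented.

```python
# Sketch of the CAFETERIA-style scoring described above.
# Assumption: each composite is an equally weighted arithmetic average
# of the per-question means; the question groupings below are hypothetical.

def question_means(responses):
    """responses: dict mapping question id -> list of scores,
    where 5 = "strongly agree" and 1 = "strongly disagree"."""
    return {q: sum(scores) / len(scores) for q, scores in responses.items()}

def composite(means, question_ids):
    """Equally weighted arithmetic average of the selected question means."""
    return sum(means[q] for q in question_ids) / len(question_ids)

# Invented sample data: five questions, four student responses each.
responses = {
    1: [5, 4, 4, 5],
    2: [4, 4, 3, 5],
    3: [3, 4, 4, 4],
    4: [5, 5, 4, 4],
    5: [2, 3, 3, 4],
}
means = question_means(responses)
instructor_score = composite(means, [1, 2, 4])  # hypothetical grouping
course_score = composite(means, [3, 5])         # hypothetical grouping
```

A percentile would then be attached to each composite by ranking it against the corresponding composites from all other course sections evaluated that semester.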
MOTIVATION FOR CHANGE
Despite the virtue of simplicity, there was a significant amount of frustration among the ECE faculty concerning the CAFETERIA-style evaluation system, and in particular the kinds of questions used. A classic example is the author's personal favorite: "My instructor explains difficult material clearly" (what this question really gauges is the student's ability to understand difficult material, and it is perhaps more accurately rephrased as, "I am able to clearly understand difficult material"). Another example is: "My instructor is among the best teachers I have ever known". What is the difference between (simply) "agreeing" with this statement (scoring it "4") and "strongly agreeing" with it (scoring it "5")? And a teacher of what subject? The question provides no focus as to the comparison group that should be considered in formulating the response.
There was also confusion concerning how the composite scores were generated, i.e., which questions were used to calculate the "course" score, the "instructor" score, and the "facilities" score, as well as how the
Meyer, D. G. (1996, June). A Criteria-Based Course and Instructor Evaluation System. Paper presented at the 1996 ASEE Annual Conference, Washington, District of Columbia. doi:10.18260/1-2--5950