2009 ASEE Annual Conference & Exposition
Austin, Texas, June 14-17, 2009
Session: Getting Started: Objectives, Rubrics, Evaluations, and Assessment
Division: New Engineering Educators
ISSN: 2153-5965
Pages: 14.374.1 - 14.374.13
DOI: 10.18260/1-2--5181
URL: https://peer.asee.org/5181
Dr. Prusak is a Professor in the Department of Engineering at Central Connecticut State University in New Britain, CT. He teaches courses in the Mechanical Engineering, Manufacturing Engineering Technology, and Mechanical Engineering Technology programs. He has over 10 years of international industrial and research experience in the fields of precision manufacturing, design of mechanical and manufacturing systems, and metrology. Dr. Prusak received his M.S. in Mechanical Engineering from the Technical University of Krakow and his Ph.D. in Mechanical Engineering from the University of Connecticut. E-mail: PrusakZ@ccsu.edu
Course Learning Outcomes and Student Evaluations – Can Both Be Improved?
Abstract
This paper describes successful and unsuccessful activities used in engineering technology courses, as well as the relationship of these activities to student learning and to the evaluation of student knowledge. The information presented is based on fifteen years of systematic student evaluations of engineering technology courses. The course evaluations were designed specifically to target areas of interest from the perspectives of learning outcomes and student perceptions. Relationships between learning outcomes and various course activities are correlated using Quality Function Deployment. Because these activities should take advantage of various learning styles, they are related to the concepts of Multiple Intelligences. The successes and failures of some of these activities are evaluated based on input from student course evaluations and faculty observation. The usefulness of typical questions asked on student evaluations is examined, along with a list of major problems with student evaluations. Practical suggestions for developing personal, outcomes-oriented course evaluations are given, with a list of useful questions that elicit more fact-based answers and are less affected by students' perceptions. A list of successful and unsuccessful course activities, including the ever-subjective issue of grading, is provided. A simple validation tool for student evaluations is also proposed.
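To make the Quality Function Deployment correlation step mentioned above concrete, the short Python sketch below shows a minimal QFD-style relationship matrix: learning outcomes (the "whats") carry importance weights, course activities (the "hows") receive relationship strengths on the conventional 0/1/3/9 scale, and each activity's priority is the weight-by-strength sum. The outcome names, weights, and strengths are hypothetical placeholders for illustration only, not data from this study.

# Minimal sketch of a QFD-style relationship matrix relating course
# learning outcomes ("whats") to course activities ("hows").
# All names, weights, and relationship strengths are illustrative
# placeholders, not the paper's actual data.

# Importance weight of each learning outcome (assumed 1-5 scale).
outcomes = {
    "Apply engineering fundamentals": 5,
    "Communicate results in writing": 3,
    "Work effectively in teams":      4,
}

activities = ["Lectures", "Lab projects", "Written reports"]

# Relationship strengths on the conventional QFD 0/1/3/9 scale:
# one row per outcome, one column per activity.
relationships = {
    "Apply engineering fundamentals": [9, 9, 1],
    "Communicate results in writing": [0, 1, 9],
    "Work effectively in teams":      [0, 9, 3],
}

# Priority of each activity = sum of (outcome weight x relationship).
priorities = [
    sum(outcomes[o] * relationships[o][j] for o in outcomes)
    for j in range(len(activities))
]

for activity, score in sorted(zip(activities, priorities),
                              key=lambda pair: -pair[1]):
    print(f"{activity:15s} {score}")

In a matrix of this kind, the activities with the highest weighted scores are those most strongly tied to high-priority outcomes, which is the sense in which the paper's QFD exercise identifies the course activities that deserve the most attention.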
Introduction
Student evaluations of teaching have been investigated extensively, especially over the past three decades, and reported in hundreds of publications. Their reliability, validity, and bias have been reported with varying conclusions, and the usefulness of the evaluations, or of certain parts of them, has been both acknowledged and questioned. Prevailing common-sense beliefs among faculty often contradict these conclusions, and many engineering educators can show their own data either supporting or questioning the general conclusions drawn from evaluations. Several studies cited by Dee 1, 2 show little to no relationship between course workload and faculty performance rating or overall course quality; however, a relationship, or the lack of one, does not imply causation. In these studies she assumes that student evaluations represent student opinions reliably and validly. That is still a long way from a true representation of the actual quality of a course. Perceptions of a fact, especially when expressed by people who are not yet qualified to make sound judgments, have limited validity or none at all. That raises the issue of which questions on an evaluation of a course and its instructor students are really prepared to answer.
Ponton et al. wrote that “theories of cognitive motivation assert that to provide maximum self-motivation, specific and challenging goals should be adopted that, if accomplished, will lead to personally satisfying outcomes” 3. Student evaluations of faculty and courses tend to be a measure of satisfaction – a notoriously inconsistent and ever-changing metric.
The basic questions, however, are: (1) satisfaction with what? and (2) how does that satisfaction truly relate to what the evaluations are supposed to measure?