Location: Austin, Texas
Conference Dates: June 14–17, 2009
ISSN: 2153-5965
Conference Session: Getting Started: Objectives, Rubrics, Evaluations, and Assessment
Tagged Division: New Engineering Educators
Page Count: 13
Pages: 14.516.1 - 14.516.13
DOI: 10.18260/1-2--4836
Permanent URL: https://peer.asee.org/4836
Adrian Ieta holds a Ph.D. in Electrical Engineering (2004) from The University of Western Ontario, Canada. He also holds a B.Sc. in Physics from the University of Timisoara, Romania (1984), a B.E.Sc. in Electrical Engineering from the Polytechnical University of Timisoara (1992), and an M.E.Sc. from The University of Western Ontario (1999). He worked on industrial projects within the Applied Electrostatics Research Centre and the Digital Electronics Research Group at the University of Western Ontario and is an IEEE member and a registered Professional Engineer of Ontario. He taught at the University of Western Ontario and is now Assistant Professor at State University of New York at Oswego, Department of Physics.
Thomas E. Doyle holds a Ph.D. in Electrical and Computer Engineering Science (2006) from The University of Western Ontario, Canada. He also holds a B.E.Sc. in Electrical and Computer Engineering, a B.Sc. in Computer Science, and an M.E.Sc. in Electrical and Computer Engineering from The University of Western Ontario. He worked on industrial projects with PlasSep Ltd. within the Applied Electrostatics Research Centre and the Digital Electronics Research Group at The University of Western Ontario and is an IEEE member and a registered Professional Engineer of Ontario. He taught at The University of Western Ontario and is currently Assistant Professor at McMaster University, Department of Electrical and Computer Engineering.
Effective criteria for teaching and learning
New and experienced faculty alike may face challenges concerning teaching evaluations. Students' perception of what is taught and what is learned may differ significantly from the instructor's perception and intention. This can become a problem, since educational institutions often use student evaluations of teaching (SET) as an important criterion in tenure, promotion, retention, and salary decisions. Moreover, studies suggest that student ratings do not help instructors improve their classes unless supported by professional advice. The questions that tend to receive special attention during the evaluation process are: "The course as a whole was...?"; "The course content was...?"; "The instructor's contribution to the course was...?"; "The instructor's effectiveness in teaching the subject matter was...?". In a previous study, we found that engineering students in fact responded to more specifically defined criteria associated with each question. The present study confirms those criteria and establishes a quantitative measure for them, and a new hierarchy of the student-perceived criteria is developed. We show that the SET questions do not test independent variables but rather correlated ones; a strong correlation among three of the four SET questions has been confirmed and quantitatively assessed. We also report that students reveal triggering factors that override their normal criteria for assigning SET scores. The authors hope that the study will be of interest to new and established engineering instructors. Furthermore, to increase the relevance of our conclusions, we plan to use this pilot study as a guideline for broader research to be conducted at several universities and across different engineering disciplines.
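The correlation analysis described above can be illustrated with a short sketch. This is not the authors' data or code: the ratings below are invented for demonstration, and the four question labels simply paraphrase the SET questions quoted in the abstract. The sketch computes pairwise Pearson correlation coefficients among the four items, which is one standard way such a correlation could be quantitatively assessed.

```python
# Illustrative sketch only: hypothetical SET ratings and a plain
# Pearson correlation, not the study's actual data or method.
from statistics import mean

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    varx = sum((a - mx) ** 2 for a in x)
    vary = sum((b - my) ** 2 for b in y)
    return cov / (varx * vary) ** 0.5

# Hypothetical ratings (0-5 scale) from seven students for the four
# standard SET questions named in the abstract.
ratings = {
    "course_as_whole":         [4, 5, 3, 4, 5, 2, 4],
    "course_content":          [4, 4, 3, 5, 5, 2, 4],
    "instructor_contribution": [5, 5, 3, 4, 5, 3, 4],
    "teaching_effectiveness":  [5, 4, 3, 4, 5, 3, 5],
}

items = list(ratings)
for i, a in enumerate(items):
    for b in items[i + 1:]:
        print(f"{a} vs {b}: r = {pearson(ratings[a], ratings[b]):.2f}")
```

With real SET data, consistently high pairwise coefficients among three of the four items would support the paper's claim that the questions measure correlated rather than independent variables.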
1. Introduction

It is now generally acknowledged that the quality of education needs to improve. While attending to students' needs, instructors should fulfill them without compromising the curriculum or educational goals. Effective criteria for teaching and learning are therefore of interest to every instructor. However, the concepts of "effective teaching" and "learning" often mean different things to instructors and students. The effectiveness of the teaching and learning process is routinely evaluated by students with the purpose of making teaching more effective. Student evaluations of teaching (SET), collected through standard or course-specific questionnaires, provide valuable feedback on students' perception of the quality of instruction. Because SET scores are often used in tenure, promotion, retention, and salary decisions, they carry significant weight for instructors. New faculty in particular may be well prepared scientifically yet have little or no training in the psychological aspects of teaching. Although at first glance the feedback would seem to help instructors improve their class performance, studies show that student ratings are of little help to instructors unless supported by professional advice [1]. This indicates that students and instructors perceive instructional activities differently, which calls for scrutiny of students' perception of, and reaction to, specific standard questions.
Based on data collected at the University of Washington, Gillmore [2] supports the view that adequate reliability of instructor ratings is achieved under certain circumstances, but only for similar conditions of measurement. On the other hand, SET scores may not be as reliable as they are thought to be, as some studies show that instructors can increase
Ieta, A., & Manseur, R., & Doyle, T. (2009, June), Effective Criteria For Teaching And Learning Paper presented at 2009 Annual Conference & Exposition, Austin, Texas. 10.18260/1-2--4836
ASEE holds the copyright on this document. It may be read by the public free of charge. Authors may archive their work on personal websites or in institutional repositories with the following citation: © 2009 American Society for Engineering Education. Other scholars may excerpt or quote from these materials with the same citation. When excerpting or quoting from Conference Proceedings, authors should, in addition to noting the ASEE copyright, list all the original authors and their institutions and name the host city of the conference. - Last updated April 1, 2015