
Evaluating Instructional Scholarship In Engineering


Conference: 2008 Annual Conference & Exposition
Location: Pittsburgh, Pennsylvania
Publication Date: June 22, 2008
Start Date: June 22, 2008
End Date: June 25, 2008
ISSN: 2153-5965
Conference Session: Best of the NEE
Tagged Division: New Engineering Educators
Page Count: 17
Page Numbers: 13.577.1 - 13.577.17
DOI: 10.18260/1-2--3147
Permanent URL: https://peer.asee.org/3147
Download Count: 395


Paper Authors


Norman Fortenberry, National Academy of Engineering


Norman Fortenberry is the founding director of the Center for the Advancement of Scholarship on Engineering Education (CASEE) at the National Academy of Engineering. CASEE is a collaborative effort dedicated to achieving excellence in engineering education--education that is effective, engaged, and efficient. CASEE pursues this goal by promoting research on, innovation in, and diffusion of effective models of engineering education.


Tylisha Baber, Michigan State University


At the time this paper was written, Dr. Tylisha Baber was serving as a National Academies Christine Mirzayan Science and Technology Policy Fellow. She earned a B.S. degree in chemical engineering from North Carolina State University and a Ph.D. in chemical engineering from Michigan State University. Tylisha’s dissertation focused on the design and implementation of a biomass conversion process for improving the fuel properties of biodiesel. She is currently an adjunct assistant professor in the Department of Mechanical and Chemical Engineering at North Carolina A&T State University.


Abstract
NOTE: The first page of text has been automatically extracted and included below in lieu of an abstract

Evaluating Instructional Scholarship in Engineering

Abstract

There is a window of opportunity to engage the engineering community in a discussion of the metrics by which to measure scholarly teaching. We place emphasis on the summative metrics that administrators could use in judging new faculty rather than on strategies that would formatively help new faculty document their teaching results and improve their practice. Our view is that unless a discussion of potential evaluative metrics is held, it will be difficult to achieve the desired level of attention to the importance of high-quality teaching. Therefore, it is in the long-term interest of new faculty that this precursor discussion targeting administrators be held.

Teaching is a multifaceted activity that is best evaluated using multiple measurement techniques and criteria. In general, there are six key steps in the development of a highly reproducible instructional evaluation system:

1. Determine the purpose of the evaluation;
2. Define the aspects/dimensions of teaching to be evaluated;
3. Identify valid sources of data or evidence for each aspect of teaching being evaluated;
4. Specify the criteria, or measuring instrument, by which the aspects will be judged;
5. Analyze and interpret the data using skilled, trained personnel; and
6. Set weights, or a scoring mechanism, for each aspect of teaching being evaluated.

The first five of these steps are examined in this paper; the sixth step is left for future work.
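As a purely hypothetical sketch (not drawn from the paper), the short Python fragment below illustrates one way the dimensions, evidence sources, and criterion scores named in steps 2 through 4 could be organized; every dimension name, evidence source, and score is invented for illustration, and equal weights stand in for the scoring mechanism that the paper defers to future work.

    # Hypothetical sketch only (not from the paper): all dimension names,
    # evidence sources, and scores below are invented for illustration.
    from dataclasses import dataclass

    @dataclass
    class Dimension:
        name: str                           # aspect of teaching to evaluate (step 2)
        criterion_scores: dict[str, float]  # rating per evidence source (steps 3-4)

        def average(self) -> float:
            """Average the criterion scores gathered across evidence sources."""
            return sum(self.criterion_scores.values()) / len(self.criterion_scores)

    # Invented example data for two dimensions of teaching.
    dimensions = [
        Dimension("Course design", {"peer review": 4.0, "self-assessment": 3.5}),
        Dimension("Classroom delivery", {"student ratings": 4.2, "peer review": 3.8}),
    ]

    # Equal weights serve as a placeholder, since the paper leaves the
    # weighting step (step 6) to future work.
    overall = sum(d.average() for d in dimensions) / len(dimensions)
    print(f"Illustrative overall score: {overall:.2f}")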

In this paper, the sources, types, reliability, and validity of data used for summative evaluations, including peer review, student ratings, and self-assessment, are examined.

I. Background

Boyer describes at least four forms of scholarship: discovery, synthesis, application, and teaching [1]. Our focus here is on the scholarship of teaching, which is often discussed in comparison with the scholarships of discovery and synthesis. It is generally accepted that evaluation of discovery and synthesis within academic departments depends heavily on “counting” publications in refereed journals. The major debates center on the significance of the publications and the quality of the journals ([2], pp. 40-41).

The scholarship of teaching has been refined by Shulman into the “scholarship of teaching and learning” and “scholarly teaching” [3]. The former is essentially the scholarship of discovery within the domain of education [4]. Our focus here is on the latter: “scholarly teaching” is distinguished from “teaching” by its focus on teaching practice and learning outcomes, its grounding in disciplinary content and pedagogic knowledge, reflective critique, commitment to communicating results to peers, and openness to peer evaluation ([2], pp. 87-88). Scholarly teaching holds the promise of enhanced student learning through rigorous faculty attention to learning. Because tenure and promotion depend upon evaluations of scholarship, and because scholarly teaching is more difficult to assess than the scholarships of discovery and synthesis, faculty may perceive few incentives to devote their efforts to scholarly teaching ([2], p. 14; [5]).

Fortenberry, N., & Baber, T. (2008, June), Evaluating Instructional Scholarship In Engineering. Paper presented at 2008 Annual Conference & Exposition, Pittsburgh, Pennsylvania. 10.18260/1-2--3147

ASEE holds the copyright on this document. It may be read by the public free of charge. Authors may archive their work on personal websites or in institutional repositories with the following citation: © 2008 American Society for Engineering Education. Other scholars may excerpt or quote from these materials with the same citation. When excerpting or quoting from Conference Proceedings, authors should, in addition to noting the ASEE copyright, list all the original authors and their institutions and name the host city of the conference. - Last updated April 1, 2015