

2010 Annual Conference & Exposition
Louisville, Kentucky

Publication Date: June 20, 2010
Start Date: June 20, 2010
End Date: June 23, 2010
Conference Session: Student Learning and Assessment
Tagged Division: Mechanical Engineering
Page Numbers: 15.628.1 - 15.628.17

Paper Authors


Randall Manteufel University of Texas, San Antonio


Dr. Randall D. Manteufel is Associate Professor of Mechanical Engineering at The University of Texas at San Antonio where he has taught since 1997. He received his Ph.D. degree in Mechanical Engineering from the Massachusetts Institute of Technology in 1991. His teaching and research interests are in the thermal sciences. He is the faculty advisor for ASHRAE at UTSA. Manteufel is a fellow of ASME and a registered Professional Engineer (PE) in the state of Texas.



Amir Karimi University of Texas, San Antonio


Amir Karimi is a Professor of Mechanical Engineering and the Associate Dean of Undergraduate Studies at The University of Texas at San Antonio (UTSA). He received his Ph.D. degree in Mechanical Engineering from the University of Kentucky in 1982. His teaching and research interests are in thermal sciences. He has served as the Chair of Mechanical Engineering (1987 to 1992 and September 1998 to January of 2003), College of Engineering Associate Dean of Academic Affairs (Jan. 2003-April 2006), and the Associate Dean of Undergraduate Studies (April 2006-present). Dr. Karimi is a Fellow of ASME, senior member of AIAA, and holds membership in ASEE, ASHRAE, and Sigma Xi. He is the ASEE Campus Representative at UTSA, ASEE-GSW Section Campus Representative, and served as the Chair of ASEE Zone III (2005-07). He chaired the ASEE-GSW section during the 1996-97 academic year.



NOTE: The first page of text has been automatically extracted and included below in lieu of an abstract

Metrics for Instructor Effectiveness Based on Student Success in Courses

Abstract

Grade-based metrics are used to gauge instructor effectiveness. The final grade distributions for 24 classes of engineering statics, taught by 10 instructors over a five-year period, are evaluated. The null hypothesis is that an instructor's grade point average (GPA) is no different from that issued by other instructors for the same course. In two cases the null hypothesis is rejected, showing that one instructor is distinctly more lenient and one distinctly harsher in grade distribution. The data show significant class-to-class GPA variation for the same instructor, so class GPA is not proposed as a sufficient metric of an instructor's effectiveness. Students passing statics are tracked into three follow-on engineering courses: dynamics, solid mechanics, and thermodynamics. A correlation coefficient between the statics grade and the follow-on grade is proposed as a better measure of the statics instructor's effectiveness. Here the null hypothesis is that there is no difference between grade correlations for the statics instructors. This null hypothesis cannot be rejected in most cases, implying that the metric does not identify which statics instructor is better at preparing students for subsequent courses. Although the correlations are weak, trends are discernible: students who pass statics under an instructor with a reputation for rigor do better in the follow-on courses. At best, the grade-based correlation metric explains up to 25% of the future grade success in follow-on engineering courses for the most effective statics instructors.


There is much discussion of the need to continuously improve our programs, curricula, and courses.1 The improvement is driven by assessments, evaluations, and feedback from both inside and outside the college. Feedback from employers, national associations,2 and community leaders frequently provides high-level guidance for improving engineering programs. One consistent theme is that programs and courses need to prepare students with the right skills and capabilities to succeed in their future endeavors. It is logical that foundational engineering courses should equip students with the fundamentals needed to succeed in subsequent courses. End-of-semester grades are the ultimate measure of a student's success in a class and are assumed to be highly correlated with the learning (defined as the knowledge, skills, abilities, and attitudes2) achieved by the student by the end of the course.

Although grades are used to assess student performance, there appears to be little use of grade-based correlations to identify instructors who do a better job of instruction in fundamental courses.3 A survey of strategies to measure teaching effectiveness4 lists 12 possibilities: student ratings, peer ratings, self-evaluation, videos, student interviews, alumni ratings, employer ratings, administrator ratings, teaching scholarships, teaching awards, learning outcomes, and teaching portfolios. Of these, the tracking of student grades in

Manteufel, R., & Karimi, A. (2010, June), Grade Based Correlation Metric To Identify Effective Statics Instructors Paper presented at 2010 Annual Conference & Exposition, Louisville, Kentucky. 10.18260/1-2--16931

ASEE holds the copyright on this document. It may be read by the public free of charge. Authors may archive their work on personal websites or in institutional repositories with the following citation: © 2010 American Society for Engineering Education. Other scholars may excerpt or quote from these materials with the same citation. When excerpting or quoting from Conference Proceedings, authors should, in addition to noting the ASEE copyright, list all the original authors and their institutions and name the host city of the conference. - Last updated April 1, 2015