Metrics for Instructor Effectiveness Based on Student Success in Courses

Abstract
Grade-based metrics are used to gauge instructor effectiveness. The final grade distributions for 24 classes of engineering statics, taught by 10 instructors over a five-year period, are evaluated. A null hypothesis is that an instructor's grade point average (GPA) is no different from that issued by other instructors for the same course. In two cases the null hypothesis is rejected, showing that one instructor is distinctly more lenient and one distinctly harsher in their grade distributions. The data show that there can be significant class-to-class GPA variation for the same instructor, so class GPA is not proposed as a sufficient metric of an instructor's effectiveness. Students passing statics are tracked into three follow-on engineering courses: dynamics, solid mechanics, and thermodynamics. A correlation coefficient between the statics grade and the follow-on grade is proposed as a better measure of the statics instructor's effectiveness. The null hypothesis is that there is no difference among the grade correlations for the statics instructors. This null hypothesis cannot be rejected in most cases, implying that the metric does not identify which statics instructor better prepares students for subsequent courses. Although the correlations are weak, a discernible trend is that students who pass statics taught by an instructor with a reputation for rigor do better in the follow-on courses. At best, the grade-based correlation metric explains up to 25% of the future grade success in follow-on engineering courses for the most effective statics instructors.
There is much discussion of the need to continuously improve our programs, curricula, and courses1. The improvement is driven by assessments, evaluations, and feedback from both inside and outside the college. Feedback from employers, national associations2, and community leaders frequently provides high-level guidance for improving engineering programs. One consistent theme is that programs and courses need to prepare students with the right skills and capabilities to succeed in their future endeavors. It seems logical that foundational engineering courses should equip students with the fundamentals needed to succeed in subsequent courses. End-of-semester grades are the ultimate measure of a student's success in a class and are assumed to be highly correlated with the learning (defined as the knowledge, skills, abilities, and attitudes2) achieved by the student by the end of the course.
Although grades are used to assess student performance, there appears to be little use of grade-based correlations to identify instructors who do a better job of instruction in fundamental courses3. A survey of strategies for measuring teaching effectiveness4 lists 12 possibilities: student ratings, peer ratings, self-evaluation, videos, student interviews, alumni ratings, employer ratings, administrator ratings, teaching scholarship, teaching awards, learning outcomes, and teaching portfolios. Of these, the tracking of student grades in
Manteufel, R., & Karimi, A. (2010, June), Grade Based Correlation Metric To Identify Effective Statics Instructors Paper presented at 2010 Annual Conference & Exposition, Louisville, Kentucky. https://peer.asee.org/16931