Albuquerque, New Mexico
June 24-27, 2001
Measuring Continuous Improvement In Engineering Education Programs: A Graphical Approach*
Graciela de L. Perez, Larry Shuman, Harvey Wolfe, and Mary Besterfield-Sacre, University of Pittsburgh
This paper presents a method for developing assessment metrics that efficiently reduce survey data to a format that facilitates quick and accurate faculty feedback as part of an EC 2000 continuous improvement process. Our methodology, the Pitt-SW Analysis, is an adaptation of the competitive strategy principle of SWOT (strengths, weaknesses, opportunities, and threats). It consists of four steps: data collection, data summarization, display of proportions, and construction of a Strengths and Weaknesses (SW) table by applying rules that reflect the desired sensitivity of the methodology. The results of the SW table can be displayed graphically using basic symbols to highlight and track changes in students' perceptions. In this way, student progress toward meeting the program's EC 2000 objectives can be monitored and fed back to faculty. We have tested the method using 1999 and 2000 academic year data to track four student cohorts. The results have been highly consistent and indicate the usefulness of this methodology for efficiently measuring student performance.
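The four-step reduction described above — summarize responses, compute proportions, and apply sensitivity rules to populate an SW table — can be sketched in code. The following Python fragment is an illustrative sketch only: the function names, the 5-point Likert scale, and the threshold values are assumptions for exposition, not the actual rules developed in the paper.

```python
# Hypothetical sketch of the proportion and rule-application steps.
# Assumes 5-point Likert survey items (1 = strongly disagree, 5 = strongly agree);
# the thresholds below are illustrative, not the paper's calibrated rules.

def proportions(responses):
    """Return the proportions of favorable (4-5) and unfavorable (1-2) responses."""
    n = len(responses)
    favorable = sum(1 for r in responses if r >= 4) / n
    unfavorable = sum(1 for r in responses if r <= 2) / n
    return favorable, unfavorable

def classify(responses, strong=0.60, weak=0.30):
    """Apply simple threshold rules to assign an SW-table symbol to one item."""
    fav, unfav = proportions(responses)
    if fav >= strong:
        return "+"   # strength
    if unfav >= weak:
        return "-"   # weakness
    return "o"       # neutral: continue monitoring

# Example: one outcome item from a student cohort
item = [5, 4, 4, 3, 5, 2, 4, 5, 4, 3]
print(classify(item))  # 7 of 10 responses are favorable -> "+"
```

Tuning the `strong` and `weak` thresholds is what the paper calls setting the desired sensitivity of the methodology: tighter thresholds flag fewer items, looser ones surface more candidates for faculty review.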
For the past five years ABET, with its EC 2000, has directed undergraduate engineering faculty to implement a continuous improvement process. Following the setting of objectives, an early step in this process has often focused on data collection, typically using surveys to collect outcome information. As a consequence, faculty now find themselves with the task of interpreting a large amount of data while trying not to be overwhelmed with information that, in its present form, may have limited assessment value.
While the concept of continuous improvement may be new to the engineering academic culture, the art (and science) of data analysis is not. We possess a number of techniques for organizing data and developing metrics to assess performance, identify areas of weakness, and design potential improvements. The challenge is to design data-driven metrics that are stable and robust for small populations and sample sizes, cost effective, and easy to use for decision-making. Further, we have found that a well-designed survey can provide very valuable information about students' attitudes and perceptions. Such surveys also can be viewed as a
* This paper supported in part by National Science Foundation grant: EEC-9872498, Engineering Education: Assessment Methodologies and Curricula Innovations and Engineering Information Foundation grant EiF 98-4: Perception versus Performance: The Effects of Gender and Ethnicity Across Engineering Schools.
Proceedings of the 2001 American Society for Engineering Education Annual Conference & Exposition Copyright 2001, American Society for Engineering Education
Shuman, L., & Perez, G., & Besterfield-Sacre, M. E., & Wolfe, H. (2001, June), Measuring Continuous Improvement In Engineering Educational Programs: A Graphical Approach Paper presented at 2001 Annual Conference, Albuquerque, New Mexico. 10.18260/1-2--9539