2011 ASEE Annual Conference & Exposition, June 26-29, 2011
Electrical and Computer
22.1119.1 - 22.1119.18
On the Implementation of ABET Feedback for Program Improvement

Kostas Tsakalis, Stephen Phillips, Ravi Gorur
School of Electrical, Computer and Energy Engineering, Arizona State University

Abstract

The ABET accreditation process calls for feedback to be an integral part of the continuous improvement of education programs. Considerable freedom is allowed in how this process is implemented and how the data are collected, quantified, and interpreted. Combined with the naturally high variability of the education process and the lack of unified, accepted performance metrics and outcome definitions, this results in a formidable yet quite interesting feedback problem. In this study, we present the approach taken by the School of ECEE at Arizona State University to formalize, quantify, and, to the greatest possible extent, automate the data collection, action, and evaluation steps of the feedback and continuous improvement process. We follow the "two-loop ABET process" (an objectives loop and an outcomes loop), in which the academic unit defines its own program objectives, which are regularly evaluated and possibly revised by the program constituents: faculty, students, alumni, and the local community and industry. The evaluation of how well the program objectives are met is accomplished through regular meetings and responses to questionnaires. We quantify these responses as adjustments to the target values of the program outcomes. Although this loop is naturally abstract and vague, and some nontrivial effort must be spent on developing the questionnaires and their correspondence with the program outcomes, its implementation is relatively straightforward.

The second, and arguably more interesting, part of the cycle is the periodic assessment of the program outcomes and the implementation of actions and policies to steer the outcomes in a desired direction.
We approach this by creating a sampling mechanism through standardized tests and questionnaires (rubrics) to quantify the assessment and data collection process in a reliable manner. The data are then used to automatically compute quantitative actions (typically expressed as instruction effort) that are to be implemented during classroom instruction and that aim to minimize the difference between assessed outcomes and target outcomes. The difficulties in this process lie in several distinct planes. One is the definition of quantitative and precise metrics that reflect changes in the program. A second is the design of data collection procedures and action definitions that minimize, or at least allow the resolution of, interdependencies and correlations among them. While these form an intellectually interesting modeling and feedback problem, one must also be prepared to accommodate some faculty resistance, indifference, or simply lack of time to perform such tasks. Viewing automation and consistency as keys to the success of the continuous improvement process, we have run this feedback process for the last four years, and here we present some of our experiences.
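As a hypothetical illustration of the kind of update the abstract describes (the paper itself does not give code), the core step — adjusting instruction effort in proportion to the gap between target and assessed outcomes — could be sketched as a simple proportional correction. The outcome names, scores, target values, and gain below are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch: proportional adjustment of per-outcome
# instruction effort toward closing the assessed-vs-target gap.
# All names and numbers are illustrative, not from the paper.

def update_instruction_effort(effort, assessed, target, gain=0.5):
    """Increase effort on outcomes that fall short of their target
    and decrease it on outcomes that exceed their target."""
    return {
        outcome: effort[outcome] + gain * (target[outcome] - assessed[outcome])
        for outcome in effort
    }

# Example: mean rubric scores vs. outcome targets on a 1-5 scale.
effort = {"a": 1.0, "b": 1.0}      # relative instruction effort per outcome
assessed = {"a": 3.2, "b": 4.1}    # mean rubric scores from this cycle
target = {"a": 4.0, "b": 4.0}      # program outcome target values

new_effort = update_instruction_effort(effort, assessed, target)
print(new_effort)  # outcome "a" gains effort; outcome "b" loses a little
```

In a real implementation the gain and the handling of correlated outcomes would matter considerably, which is exactly the modeling difficulty the abstract identifies.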
Phillips, S. M., Tsakalis, K., & Gorur, R. (2011, June). On the Implementation of ABET Feedback for Program Improvement. Paper presented at 2011 ASEE Annual Conference & Exposition, Vancouver, BC. 10.18260/1-2--18560
ASEE holds the copyright on this document. It may be read by the public free of charge. Authors may archive their work on personal websites or in institutional repositories with the following citation: © 2011 American Society for Engineering Education. Other scholars may excerpt or quote from these materials with the same citation. When excerpting or quoting from Conference Proceedings, authors should, in addition to noting the ASEE copyright, list all the original authors and their institutions and name the host city of the conference. - Last updated April 1, 2015