Conference: 2004 American Society for Engineering Education Annual Conference & Exposition, Salt Lake City, Utah, June 20-23, 2004
ISSN: 2153-5965
Pages: 9.327.1 - 9.327.10 (10 pages)
DOI: 10.18260/1-2--13186
URL: https://peer.asee.org/13186
Session 3431

Comprehensive Program Assessment: The Whys and Wherefores
Carole Goodson, Luke Faulkenberry, Susan Miertschin, and Barbara Stewart
University of Houston
Introduction
Many faculty view program evaluation as a strenuous process: something imposed by a higher authority, another hoop to jump through, and of little real benefit. In fact, there are a number of reasons to undertake some level of program evaluation. First, evaluation is required by entities that are external to, but nonetheless important to, the academic institution, including accrediting agencies. Second, most academic institutions have internal planning and evaluation requirements directed at assuring the quality of programs and services. Finally, evaluation data can make the case to decision makers for increased support of under-resourced areas.
While evaluation is thus imposed on faculty by various authorities, it is also a matter of professional integrity. Faculty members want to deliver good programs that enable their students to gain secure, stimulating, and satisfactorily remunerative employment, and they want to assure employers of the competence and potential of program graduates. Evaluating programs allows faculty to reflect, to better understand how a program is working, and to see where it is headed. It enables faculty to catch potential curriculum problems early and make corrections before more serious problems develop. Evaluation driven by faculty integrity spawns continual program improvement, which in turn helps establish best practices that can be passed on to others.
Thus, while evaluation can be viewed as onerous, most faculty members are already engaged in some form of program evaluation. Often, however, these efforts are disconnected, small in scale, and narrow in focus. What is needed is a system for collecting, compiling, and warehousing data in a planned, consistent, and methodical way. Once data gathering and warehousing are systematized, analysis and review can take place, and action can then be based on the resulting information.
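To make the idea of systematized data gathering concrete, the following sketch shows one possible way such a data warehouse could be modeled. It is purely illustrative and is not the authors' system: the class names, fields, and the retention example are hypothetical assumptions. The sketch anticipates the structure described later in this paper, in which each assessment goal has multiple indicators and each indicator has defined ways to measure its attainment.

from __future__ import annotations

from dataclasses import dataclass, field
from datetime import date


@dataclass
class Measurement:
    # One dated observation for an indicator (e.g., a survey mean or a rate).
    taken_on: date
    value: float
    source: str  # e.g., "graduate survey", "registrar report"


@dataclass
class Indicator:
    # A measurable signal of progress toward a goal; measurements accumulate
    # over successive assessment cycles so that trends can be reviewed later.
    description: str
    measurements: list[Measurement] = field(default_factory=list)

    def record(self, taken_on: date, value: float, source: str) -> None:
        self.measurements.append(Measurement(taken_on, value, source))

    def latest(self) -> Measurement | None:
        return max(self.measurements, key=lambda m: m.taken_on, default=None)


@dataclass
class AssessmentGoal:
    # A program-level goal with multiple indicators.
    name: str
    indicators: list[Indicator] = field(default_factory=list)


# Hypothetical example: warehouse one cycle of data, then retrieve the most
# recent measurement for analysis and review.
retention = Indicator("First-to-second-year retention rate")
retention.record(date(2003, 9, 1), 0.82, "registrar report")
goal = AssessmentGoal("Programs endure and become stronger", [retention])
latest = goal.indicators[0].latest()
print(f"{goal.name}: {latest.value:.0%} as of {latest.taken_on}")

Keeping each measurement dated and attributed to a source is what allows the later analysis and review to rest on consistent, comparable data rather than on disconnected one-time efforts.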
During the 2002-2003 academic year, the Assessment and Continuous Improvement (ACI) Committee of the College of Technology at the University of Houston was formed, with membership representing faculty from diverse program areas. The committee was tasked with planning and implementing a broad program assessment and continuous improvement process for the College. The ACI Committee defined its overall goal as follows: “Develop a process for acquiring information that will help programs excel, endure and become stronger.”
This paper describes the processes employed in developing the assessment system. To date, the system consists of a set of assessment goals, multiple indicators for each goal, ways to measure attainment of each indicator, and a phased implementation plan. In this paper, particular emphasis