June 22 - 25, 2008
Software Engineering Constituent Committee
13.1411.1 - 13.1411.10
World-Class Outcomes Assessment on a Shoestring
In the fall of 2004, the software engineering faculty at the University of Wisconsin - Platteville developed a set of outcomes and assessment procedures using the principles presented at an ABET Faculty 2.0 Workshop. In addition, we have taken extra steps to improve the quality of the assessment process and reduce the effort required. The result is a tightly specified assessment process that allows us to achieve quality assessment results at a reasonable expenditure of faculty time and effort.
Criterion three of the ABET engineering accreditation process states that program outcomes must be assessed with evidence that the results of this assessment process are applied to the further development of the program.3 Anecdotally, many who go through the ABET accreditation process view this criterion as the most problematic. Moreover, satisfying this criterion usually requires significant ongoing efforts.
In the fall of 2004, the software engineering faculty at the University of Wisconsin - Platteville developed a set of outcomes and assessment procedures using the principles presented at an ABET Faculty 2.0 Workshop.4 These outcomes assessment procedures share a number of common practices delineated by other authors who have chronicled their experiences with ABET outcomes assessment.2,4,6 Our particular instantiation has:
• Seven program outcomes with two to five performance criteria established for each outcome.
• Two to four measurements for each performance criterion, with at least one direct and one indirect measurement for each. The direct measurements consist of in-course assessments and direct observation. The indirect measurements consist of course surveys and graduating senior exit surveys.
• A fixed set of rubrics for each of the measurements.
• Semester assessment reports summarizing assessment data, identifying problem areas, suggesting improvements, noting where changes due to assessment lead to improvements, and suggesting changes to the assessment process.
In addition to the above, we have taken a few extra steps to improve the quality of the assessment process and reduce the effort required. We have specified how each measurement will be performed. This specification helps ensure that a given measurement is performed reliably regardless of the semester or instructor. We have normalized each rubric to produce a numerical result from one to five for each student. We have created automated trigger values based on three sets of criteria that flag those measurements that are below desired levels and therefore require further analysis. These steps go beyond those discussed by other authors.2,4,6
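The excerpt above does not reproduce the three sets of trigger criteria themselves. As a minimal sketch only, assuming rubric scores normalized to the paper's one-to-five scale and using hypothetical trigger rules and threshold values (a mean floor, a cap on the fraction of low-scoring students, and a minimum-score check, none of which are taken from the paper), automated flagging of a measurement might look like:

```python
# Illustrative sketch of automated trigger flagging for one measurement.
# All criteria names and threshold values below are assumptions for
# demonstration, not the actual trigger criteria used by the program.

def flag_measurement(scores, mean_floor=3.5, low_score=3, low_fraction=0.25):
    """Return the list of triggered criteria for one measurement.

    scores: per-student rubric results, each normalized to 1..5.
    An empty return value means no further analysis is required.
    """
    if not scores:
        return ["no data collected"]
    triggers = []
    # Criterion 1 (hypothetical): class mean falls below a target floor.
    mean = sum(scores) / len(scores)
    if mean < mean_floor:
        triggers.append(f"mean {mean:.2f} below floor {mean_floor}")
    # Criterion 2 (hypothetical): too many students score below threshold.
    frac_low = sum(1 for s in scores if s < low_score) / len(scores)
    if frac_low > low_fraction:
        triggers.append(f"{frac_low:.0%} of students below {low_score}")
    # Criterion 3 (hypothetical): any student at the minimum score.
    if min(scores) == 1:
        triggers.append("at least one student at minimum score")
    return triggers
```

Measurements returning any triggers would then be singled out in the semester assessment report for further analysis, matching the workflow described above.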
Clifton, J., Hasker, R., & Rowe, M. (2008, June). World-Class Outcomes Assessment on a Shoestring. Paper presented at the 2008 ASEE Annual Conference & Exposition, Pittsburgh, Pennsylvania. https://peer.asee.org/3417
ASEE holds the copyright on this document. It may be read by the public free of charge. Authors may archive their work on personal websites or in institutional repositories with the following citation: © 2008 American Society for Engineering Education. Other scholars may excerpt or quote from these materials with the same citation. When excerpting or quoting from Conference Proceedings, authors should, in addition to noting the ASEE copyright, list all the original authors and their institutions and name the host city of the conference. - Last updated April 1, 2015