June 24, 2007
June 27, 2007
Direct Assessment Measures

Introduction
Engineering programs have recently completed, or are preparing for, their second accreditation visit under the assessment-based criteria. Based on our combined experience of more than 30 visits as program evaluators and team chairs, many programs appear to be struggling to identify valid measures for their program outcomes and thus to come into full compliance with the requirements of Criterion 3 of the Engineering Criteria. This is substantiated by the relatively large number of citations for shortcomings related to some aspect of this criterion.1 One cause is that many programs rely very heavily on surveys and similar indirect, or “soft,” measures of these outcomes. We believe there is too much reliance on these indirect assessment measures and that programs should make direct assessment a cornerstone of their program improvement processes.
Indirect assessment measures include end-of-course surveys, graduation surveys, and alumni surveys in which an evaluation is based on opinion or self-reporting. These assessment measures are necessary but not sufficient, and some may be more appropriate for evaluation of objectives rather than demonstration of outcomes. Direct assessment by the faculty is necessary to provide an objective measure of students’ achievement of educational outcomes. All quality assessment plans should include direct assessment measures.
ABET is taking a harder line on the need for direct assessment as part of demonstrating that students have the required skills, knowledge, and attitudes encompassed by the eleven ABET-designated outcomes. ABET has received significant negative feedback from engineering programs about inconsistencies in program evaluator findings on compliance with Criterion 3. While programs may in the past have passed through accreditation visits with only indirect assessment (surveys, faculty opinions, ad hoc data) as evidence of student achievement of outcomes, this will likely no longer be the case. Programs relying only on indirect measures for outcomes assessment will likely be cited with shortcomings in future accreditation visits.
This paper deals mainly with assessment of program outcomes—the knowledge, skills, and attributes that students should demonstrate by the time of graduation. The focus is therefore on campus- or curriculum-based assessment. This paper does not address program educational objectives, which describe the career and professional accomplishments of program alumni. The former usually require different evaluation and assessment tools than the latter, although some overlap exists. We will also refer to course objectives, which are not to be confused with program educational objectives.
Given this context and the growing importance of using direct assessment methods, this paper provides a review of direct assessment measures. These methods include, but are not limited to, instructor end-of-course assessments, targeted assignments (assigned problems, exam questions, projects), capstone examinations (including the FE Exam), student portfolios, and capstone experiences.
Shaeiwitz, J., & Briedis, D. (2007, June). Direct Assessment Measures. Paper presented at the 2007 Annual Conference & Exposition, Honolulu, Hawaii. https://peer.asee.org/1537