June 22–25, 2008
Educational Research and Methods
Guaranteeing Achievement of Program Educational Outcomes While Providing Data for Program Improvement
A direct assessment approach for engineering program outcomes has been developed that ensures all students meet all outcomes at a threshold level. At the same time, the approach can serve as part of a strategy for continual improvement of the program. This paper describes the approach, presents an example of the assessment of one program outcome within a single course, and discusses the program's response to the data obtained through the assessment process.
The two primary types of assessment of the student outcomes of engineering programs (formative and summative) may be at odds. Summative assessment is designed to ensure that students are attaining the outcomes, while formative assessment is a program-level attempt to improve student attainment of outcomes [1]. That is, summative assessment asks whether students make it over the bar, while formative assessment is part of a program's self-improvement process. The ABET Engineering Criteria 2000 require programs to undergo both formative and summative assessment, a requirement that has produced wide-ranging changes in engineering education [2]. However, self-improvement implies some lack of attainment of outcomes, so the need to demonstrate successful summative assessment could hinder attempts at formative assessment. In this paper, a summative assessment process that can actually strengthen formative assessment is discussed.
Engineering programs use a wide range of strategies to overcome this potential problem. Here, an assessment strategy used by a civil engineering program is presented that uses direct assessment of student work to ensure that all students meet all outcomes at a threshold level. The same assessment tool, combined with indirect assessments, can also provide data for program self-improvement. This section discusses the assessment strategy; the following section gives an example of the assessment of an outcome and the planned program response to it; and the final section draws conclusions.
For each program outcome, several performance criteria [3] were developed using verbs based on Bloom's taxonomy [4, 5]. Bloom's taxonomy comprises six levels (knowledge, comprehension, application, analysis, synthesis, and evaluation), in which each level assumes attainment of the lower levels. By basing the performance criteria on verbs tied to Bloom's taxonomy, it is possible to gain precision regarding the level of ability expected from students for each performance criterion. Lists of active verbs describing actions students are able to perform at each of Bloom's levels have been developed, for example, for use in "enumerating various attributes" for the ABET Criterion 3 outcomes [5]. In that project, each of the ABET outcomes [i.e., Criterion 3 (a) through (k)] was broken into a larger number of component parts, and each component part of an outcome was described as it might be addressed by a student operating at the various levels.
Crago, R. (2008, June). Guaranteeing Achievement of Program Educational Outcomes While Providing Data for Program Improvement. Paper presented at the 2008 ASEE Annual Conference & Exposition, Pittsburgh, Pennsylvania. DOI: 10.18260/1-2--3405