Closing The Assessment Loop


1998 Annual Conference
Seattle, Washington

Publication Date: June 28, 1998
Conference Dates: June 28 - July 1, 1998
Page Numbers: 3.141.1 - 3.141.5


Paper Authors

Joseph A. Shaeiwitz

NOTE: The first page of text has been automatically extracted and included below in lieu of an abstract

Session 2613

Closing the Assessment Loop

Joseph A. Shaeiwitz
West Virginia University

One purpose of an outcomes assessment plan is continuous program improvement. An outcomes assessment plan has goals, measures, and feedback. Continuous program improvement can be accomplished only if the results obtained from measuring achievement of the goals affect the education program. This is analogous to feedback control, in which a measurement is compared to the set point (the goals) and an adjustment is made upstream (within the program) to bring the measured property closer to the set point.
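The feedback-control analogy above can be sketched in a few lines of Python. This is only an illustration of the control principle being invoked, not anything from the paper itself; the function and parameter names (`run_feedback_loop`, `set_point`, `gain`) are invented for the example.

```python
# Illustrative sketch of simple proportional feedback control:
# compare a measurement to the set point (the goals), then adjust
# "upstream" to reduce the error. Names here are hypothetical.

def run_feedback_loop(set_point, initial_value, gain=0.5, cycles=5):
    """Repeatedly measure, compare to the set point, and adjust.

    Returns the history of measured values, which should converge
    toward the set point when 0 < gain < 2.
    """
    value = initial_value
    history = []
    for _ in range(cycles):
        error = set_point - value   # measurement vs. set point (goals)
        value += gain * error       # adjustment within the "program"
        history.append(value)
    return history

# With gain = 0.5, each cycle halves the remaining gap to the set point.
print(run_feedback_loop(set_point=100.0, initial_value=60.0))
```

In the assessment analogy, the "measurement" is an outcomes assessment result, the "set point" is the program's goals, and the "adjustment" is a change to the curriculum or teaching methods.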

The act of closing the assessment loop, or providing feedback to the program, will probably be the most difficult aspect for engineering programs as they implement assessment plans to satisfy ABET Criteria 2000. Much of the assessment literature suggests that developing and agreeing upon goals is the most difficult aspect for faculty unaccustomed to discussing undergraduate education issues in great detail. However, the eleven goals in ABET Criteria 2000, Criterion 3, provide a "default" position for faculty who are unable, or who choose not, to define their own set of goals [1]. There is also an extensive literature on outcomes assessment measures used at a variety of schools [2-6]. Closing the assessment loop will require a paradigm shift in faculty attitudes and behavior. Faculty must be receptive to results from outcomes measures that may suggest students have not achieved the desired outcomes, and they must be willing to alter the curriculum and/or their teaching methods to ensure that students do achieve them.

In this paper, the experiences at West Virginia University, mostly within the Department of Chemical Engineering, are used as examples of how results of outcomes measures have been used for continuous program improvement.

Results from Design Projects

In the assessment plan in Chemical Engineering at West Virginia University, the primary assessment measure is a series of individual, senior design projects which students must defend in front of at least two faculty [5]. The defense is a feedback mechanism for students. They learn immediately what they did well and what they could have done better. It is tantamount to a one-hour, individual tutorial by two faculty. Students routinely cite this as their most significant learning experience. After each project, the faculty involved prepare an assessment report. This report is used in two ways. First, it is the basis for the project review provided to the class, often over several class meetings. Aspects of the project that were done well are reinforced, and aspects that require improvement are emphasized, often through additional problem assignments.

Shaeiwitz, J. A. (1998, June), Closing The Assessment Loop Paper presented at 1998 Annual Conference, Seattle, Washington.

ASEE holds the copyright on this document. It may be read by the public free of charge. Authors may archive their work on personal websites or in institutional repositories with the following citation: © 1998 American Society for Engineering Education. Other scholars may excerpt or quote from these materials with the same citation. When excerpting or quoting from Conference Proceedings, authors should, in addition to noting the ASEE copyright, list all the original authors and their institutions and name the host city of the conference. - Last updated April 1, 2015