Session 3560
A Potential Barrier to Completing the Assessment Feedback Loop
Ed Furlong and Promod Vohra, Northern Illinois University
Northern Illinois University’s College of Engineering and Engineering Technology employs a comprehensive nine-component assessment model. Each element in the assessment model (Pre-test, Post-test, and Portfolio; Standardized Testing; Student and Faculty Surveys; Student Internships and Cooperative Work Performance; the Capstone Experience; Student Placement Information; Employer Surveys; Alumni Participation; and Peer Review of the Curriculum) provides a mechanism for data collection.
Within the context of our assessment model, this paper details strategies for analyzing and using assessment results as feedback directed toward the improvement of total program quality. Incorporating feedback into the assessment process is often difficult. Assuming the measurement of selected learning outcome criteria is both valid and reliable, benchmarks for acceptable performance must be established, and decision rules that provide a basis for detecting meaningful differences must be formulated. Moreover, these tasks are conducted in a policy environment where the implementation of affirmative steps may be constrained by numerous internal and external stakeholders.
One of the most fundamental problems with assessment research involves how assessment results are to be placed within a meaningful comparative context. Any analysis of assessment results involves ascertaining the significance of differences from an established performance baseline, a performance goal, or other criteria. The significance of any comparisons that are made may be evaluated using statistical and/or substantive criteria. This paper will explore the potential and limits of statistical analysis, particularly as both relate to the concept of statistical power in survey research, and discuss several strategies for dealing with the problems posed by inadequate numbers of respondents.
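The statistical-power problem described above can be made concrete with a small numerical sketch (this example is illustrative and not drawn from the paper; the function name and effect-size values are assumptions). Using a standard normal approximation for a two-group comparison of survey means, it shows how the probability of detecting a given standardized difference collapses when the number of respondents per group is small:

```python
# Illustrative sketch (not from the paper): approximate statistical power
# of a two-sample z-test for a standardized effect size d, with n
# respondents per group, at a two-sided significance level alpha.
from math import sqrt
from statistics import NormalDist


def approx_power(d: float, n_per_group: int, alpha: float = 0.05) -> float:
    """Normal-approximation power for a two-group comparison of means."""
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)          # critical value, ~1.96 for alpha=0.05
    noncentrality = d * sqrt(n_per_group / 2)  # expected shift of the test statistic
    return 1 - z.cdf(z_crit - noncentrality)   # P(reject H0 | true effect = d)


# A "medium" effect (d = 0.5, in Cohen's convention) needs roughly 64
# respondents per group to reach the conventional 0.80 power benchmark;
# with 20 respondents per group, power falls to about 0.35.
for n in (20, 40, 64, 100):
    print(n, round(approx_power(0.5, n), 3))
```

The pattern this sketch exposes is the barrier the paper names: with the respondent counts typical of single-program alumni or employer surveys, even substantively large differences from a performance baseline will often fail to reach statistical significance.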
In every academic program, teaching practices and student learning have always been important issues to consider. However, the overall thrust of assessment has changed markedly. During the past decade, the focus of assessment has shifted from input variables (resources committed), to output variables (counting students and graduates), to learning and performance outcomes (highly specific criterion-related competencies). And as is often the case, academic programs, notorious for their conservatism and procrastination, have been compelled to shift their assessment efforts from inputs to outcomes by the higher levels of accountability demanded by external stakeholders such as university governing bodies, accreditation agencies, state boards of higher education, and the industrial consumers of trained students. The groundswell for increased accountability in higher education has also filtered upward from concerned students and parents, to interested state legislators, state legislatures, and executive branch agencies.
Proceedings of the 2002 American Society for Engineering Education Annual Conference & Exposition Copyright © 2002, American Society for Engineering Education
Vohra, D. P. (2002, June). A Potential Barrier to Completing the Assessment Feedback Loop. Paper presented at the 2002 Annual Conference, Montreal, Canada. doi:10.18260/1-2--10323