St. Louis, Missouri
June 18, 2000
June 21, 2000
5.674.1 - 5.674.4
Triangulating Assessments: Multi-Source Feedback Systems and Closed Form Surveys
Mary Besterfield-Sacre, Larry Shuman, and Harvey Wolfe, University of Pittsburgh; Jack McGourty, Columbia University
Triangulation is becoming an important factor as more engineering programs begin to prepare for accreditation under ABET’s EC 2000 criteria. In general, the purpose of triangulation in assessment and evaluation is to provide multiple measures for a particular outcome. For example, the ‘ability to work on multi-disciplinary teams’ may be assessed through: (1) students’ self-assessments of their enjoyment of working on teams via closed-form questionnaires, (2) ratings of a student by peers on the team, or (3) direct observation of a team by a trained evaluator. Triangulation may also involve using similar metrics across two or more institutions so that results may be compared. Because many of the methods and instruments currently being used in engineering education have not been fully validated in terms of content or construct, triangulation provides one means of increasing the validity of the outcome’s measurements, or, conversely, the validity of the methodology used to obtain them. Further, it is also possible that no metric or method exists that adequately measures the outcome in question. In this case, by triangulating different methods and metrics, one obtains multiple surrogates for the true measure of the outcome, thus providing a much-needed anchor measure where none exists.
Once results from triangulation have been obtained, statistical methods may be used to determine the relationships among the various metrics. If there is strong correlation among the metrics, then the use of multiple measures may be reduced. Those metrics and measures that are more efficient and cost effective could then be used to routinely assess students’ progress on an outcome. The more in-depth, and often more costly, metrics could then be used only periodically or with samples of the students. This helps minimize costs and streamlines program evaluation.
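As a minimal sketch of the correlation check described above, the following computes a Pearson correlation coefficient between two measures of the same outcome. The student scores are hypothetical illustration data, not results from the study; the metric names are likewise illustrative.

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical scores for one outcome ("ability to work on teams")
# measured two ways for the same six students:
peer_ratings = [3.2, 4.1, 2.8, 4.5, 3.9, 3.0]   # multi-source feedback
survey_scores = [3.0, 4.3, 2.5, 4.4, 4.0, 3.1]  # closed-form questionnaire

r = pearson_r(peer_ratings, survey_scores)
print(f"r = {r:.2f}")
```

A high correlation would suggest the two instruments are measuring the same construct, so the cheaper one could be used routinely and the costlier one reserved for periodic checks on a student sample.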
This work-in-progress paper discusses a triangulation experiment comparing two forms of assessment: multi-source feedback systems and closed-form (attitudinal) surveys. Specifically, we are conducting a longitudinal triangulation experiment involving students from the University of Pittsburgh’s Department of Industrial Engineering. The experiment began in the fall 1999 semester, when the students were in the first semester of their sophomore year, and will continue through the fall of 2000, when they complete the first semester of their junior year. This experiment is part of a larger research project in which we are evaluating the information obtained when multiple methods are used on a cohort of industrial engineering students who are being tracked from the beginning of their sophomore year until graduation. Overall, we are investigating four different methods for measuring outcomes: questionnaires, multi-source feedback, concept maps, and intellectual development. The purpose of the study
Besterfield-Sacre, M. E., & Shuman, L. J., & McGourty, J., & Wolfe, H. (2000, June), Triangulating Assessments: Multi-Source Feedback Systems and Closed Form Surveys. Paper presented at 2000 Annual Conference, St. Louis, Missouri. https://peer.asee.org/8783
ASEE holds the copyright on this document. It may be read by the public free of charge. Authors may archive their work on personal websites or in institutional repositories with the following citation: © 2000 American Society for Engineering Education. Other scholars may excerpt or quote from these materials with the same citation. When excerpting or quoting from Conference Proceedings, authors should, in addition to noting the ASEE copyright, list all the original authors and their institutions and name the host city of the conference. - Last updated April 1, 2015