Location: Virtual Online
Publication Date: June 22, 2020
Conference Start Date: June 22, 2020
Conference End Date: June 26, 2021
Division: Electrical and Computer
Page Count: 13
DOI: 10.18260/1-2--35023
Permanent URL: https://peer.asee.org/35023
Download Count: 465
Bahar Memarian is a PhD candidate in Industrial Engineering and the Collaborative Specialization in Engineering Education at the University of Toronto, Canada. Her primary research interests are in Human Factors and Systems Engineering, specifically their application in education (e.g., learning outcomes assessment and engineering problem solving). Before her doctoral studies, she completed her MASc (2015) and BASc (2012) in Electrical Engineering at the University of Toronto.
Susan McCahan is a Professor in the Department of Mechanical and Industrial Engineering at the University of Toronto, where she currently holds the positions of Vice-Provost, Innovations in Undergraduate Education and Vice-Provost, Academic Programs. She received her B.S. in Mechanical Engineering from Cornell University, and her M.S. and Ph.D. in Mechanical Engineering from Rensselaer Polytechnic Institute. She is a Fellow of the American Association for the Advancement of Science in recognition of contributions to engineering education, and she has received several major teaching and teaching leadership awards, including the 3M National Teaching Fellowship and the Medal of Distinction in Engineering Education from Engineers Canada.
The work presented is intended for a poster presentation. In the assessment of engineering problem-solving skill, grading schemes and sometimes rubrics are used. The common notion of reliability is largely based on inter-rater reliability (IRR) rather than the inter-coder reliability (ICR) of the feedback delivered. Evaluation culture, however, is shifting from a measurement-of-knowledge perspective to an assessment-for-learning one; reliability should therefore be understood within this new domain and through ICR methods. This study investigates the ICR of evaluations made with a novel formative feedback instrument that the authors previously proposed and tested, and the findings are drawn from the same data set. The ICR of the instrument’s indicators is analyzed, and the ICR of each indicator is examined for correlation with both solution quality and assessor sample size. The data were collected from counter-balanced simulated evaluation trials (n = 33 assessors), then coded and grouped by identical solution profiles. Following the literature, only solution profiles with a percentage agreement of 55% or higher are used in the analysis. Findings show that 4 of the 6 indicators of the instrument (CAIR) reach a percentage agreement of 55% or higher across a large collection of evaluated electrical engineering solution profiles, consistent with literature that reports an ICR of 55% to 75% as acceptable. The correlational analysis confirms that percentage agreement on most of the instrument’s criteria is uncorrelated with both assessor sample size and solution quality.
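To illustrate the kind of analysis the abstract describes, the Python sketch below computes percentage agreement per indicator and a correlation of agreement with assessor sample size. This is a minimal sketch under assumptions, not the authors’ code: the indicator names, the codes, and all numeric values are hypothetical, and the modal-share calculation shown is only one common variant of percentage agreement.

from collections import Counter
from statistics import correlation  # Pearson's r, Python 3.10+

# ratings[indicator] = codes assigned by assessors to one solution profile
# (indicator names and codes are hypothetical)
ratings = {
    "indicator_1": ["met", "met", "met", "not_met"],
    "indicator_2": ["met", "not_met", "not_met", "met"],
}

def percent_agreement(codes):
    """Share of assessors who assigned the modal (most common) code."""
    modal_count = Counter(codes).most_common(1)[0][1]
    return modal_count / len(codes)

for name, codes in ratings.items():
    pa = percent_agreement(codes)
    # Profiles below the 55% agreement threshold would be excluded,
    # mirroring the cutoff reported in the abstract.
    status = "kept" if pa >= 0.55 else "excluded"
    print(f"{name}: {pa:.0%} agreement ({status})")

# Correlation of agreement with assessor sample size across profiles;
# the values below are invented purely for illustration.
sample_sizes = [8, 12, 20, 33]
agreements = [0.62, 0.58, 0.71, 0.66]
print(f"r = {correlation(sample_sizes, agreements):+.2f}")

Run as written, indicator_1 reaches 75% agreement and is kept, while indicator_2 ties at 50% and is excluded, showing how the 55% cutoff separates profiles.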
Memarian, B., & McCahan, S. (2020, June). Outcomes-based Assessment Instrument for Engineering Problem-solving Skills. Paper presented at the 2020 ASEE Virtual Annual Conference Content Access, Virtual Online. https://doi.org/10.18260/1-2--35023
ASEE holds the copyright on this document. It may be read by the public free of charge. Authors may archive their work on personal websites or in institutional repositories with the following citation: © 2020 American Society for Engineering Education. Other scholars may excerpt or quote from these materials with the same citation. When excerpting or quoting from Conference Proceedings, authors should, in addition to noting the ASEE copyright, list all the original authors and their institutions and name the host city of the conference. - Last updated April 1, 2015