
An Inquiry Into the Use of Intercoder Reliability Measures in Qualitative Research


Conference

2019 ASEE Annual Conference & Exposition

Location

Tampa, Florida

Publication Date

June 15, 2019

Start Date

June 15, 2019

End Date

June 19, 2019

Conference Session

ERM Technical Session 1: Methods Refresh: Approaches to Data Analysis in Engineering Education Research

Tagged Division

Educational Research and Methods

Tagged Topic

Diversity

Page Count

15

Permanent URL

https://peer.asee.org/32067

Download Count

27


Paper Authors


Amy Wilson-Lopez Utah State University


Amy Wilson-Lopez is an associate professor at Utah State University who studies culturally responsive engineering and literacy-infused engineering with linguistically diverse students.


Angela Minichiello, P.E., Utah State University (ORCID: orcid.org/0000-0002-4545-9355)


Angela Minichiello is an assistant professor in the Department of Engineering Education at Utah State University (USU) and a registered professional mechanical engineer. Her research examines issues of access, diversity, and inclusivity in engineering education. In particular, she is interested in engineering identity, problem-solving, and the intersections of online learning and alternative pathways for adult, nontraditional, and veteran undergraduates in engineering.


Theresa Green Utah State University


Theresa Green is a graduate student at Utah State University pursuing a PhD in Engineering Education. Her research interests include K-12 STEM integration and improving diversity and inclusion in engineering.


Abstract

In this theory paper, we set out to consider, as a matter of methodological interest, the use of quantitative measures of inter-coder reliability (e.g., percentage agreement, correlation, Cohen's Kappa) as necessary and/or sufficient correlates for quality within qualitative research in engineering education. It is well known that the phrase qualitative research represents a diverse body of scholarship conducted across a range of epistemological viewpoints and methodologies. Given this diversity, we concur with those who state that it is ill-advised to propose recipes or stipulate requirements for achieving validity and reliability in qualitative research. Yet, as qualitative researchers ourselves, we repeatedly find the need to communicate the validity and reliability, or quality, of our work to different stakeholders, including funding agencies and the public. One method for demonstrating quality, which is increasingly used in qualitative research in engineering education, is the practice of reporting quantitative measures of agreement between two or more people who code the same qualitative dataset. In this theory paper, we address this common practice in two ways. First, we identify instances in which inter-coder reliability measures may not be appropriate or adequate for establishing quality in qualitative research. We question research that suggests that the numerical measure itself is the goal of qualitative analysis, rather than the depth and texture of the interpretations that are revealed. Second, we identify complexities or methodological questions that may arise during the process of establishing inter-coder reliability, which are not often addressed in empirical publications. To achieve these purposes, we ground our work in a review of qualitative articles, published in the Journal of Engineering Education, that have employed inter-rater or inter-coder reliability as evidence of research validity.
In our review, we will examine the disparate measures and scores (from 40% agreement to 97% agreement) used as evidence of quality, as well as the theoretical perspectives within which these measures have been employed. Then, using our own comparative case study research as an example, we will highlight the questions and challenges we faced as we worked to meet rigorous standards of evidence in our qualitative coding analysis. We will explain the processes we undertook and the challenges we encountered as we assigned codes to a large qualitative dataset approached from a postpositivist perspective. We will situate these coding processes within the larger methodological literature and, in light of contrasting literature, describe the principled decisions we made while coding our own data. We will use this review of qualitative research and our own qualitative research experiences to elucidate inconsistencies and unarticulated issues related to evidence for qualitative validity, as a means to generate further discussion regarding quality in qualitative coding processes.
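The abstract names percentage agreement and Cohen's Kappa as commonly reported inter-coder reliability measures. As an illustration only (the paper itself does not supply code, and the codes and coder labels below are invented), a minimal sketch of how these two measures are computed for two coders who assigned one code apiece to the same set of excerpts is:

```python
from collections import Counter

def percentage_agreement(coder_a, coder_b):
    """Fraction of items on which the two coders assigned the same code."""
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

def cohens_kappa(coder_a, coder_b):
    """Cohen's Kappa: observed agreement corrected for chance agreement.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
    and p_e is the agreement expected by chance, computed from each
    coder's marginal code frequencies.
    """
    n = len(coder_a)
    p_o = percentage_agreement(coder_a, coder_b)
    freq_a = Counter(coder_a)
    freq_b = Counter(coder_b)
    p_e = sum(freq_a[code] * freq_b[code] for code in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codes assigned by two coders to ten interview excerpts
coder_a = ["identity", "access", "access", "identity", "pathways",
           "identity", "access", "pathways", "identity", "access"]
coder_b = ["identity", "access", "identity", "identity", "pathways",
           "identity", "access", "pathways", "access", "access"]

print(percentage_agreement(coder_a, coder_b))  # 0.8
print(cohens_kappa(coder_a, coder_b))          # 0.6875
```

Note how the two measures can diverge: the coders here agree on 80% of items, yet Kappa is lower because some of that agreement would be expected by chance given how often each coder used each code. This gap is one reason the choice of measure, and the threshold treated as "acceptable," matters methodologically.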

Wilson-Lopez, A., Minichiello, A., & Green, T. (2019, June), An Inquiry Into the Use of Intercoder Reliability Measures in Qualitative Research. Paper presented at the 2019 ASEE Annual Conference & Exposition, Tampa, Florida. https://peer.asee.org/32067

ASEE holds the copyright on this document. It may be read by the public free of charge. Authors may archive their work on personal websites or in institutional repositories with the following citation: © 2019 American Society for Engineering Education. Other scholars may excerpt or quote from these materials with the same citation. When excerpting or quoting from Conference Proceedings, authors should, in addition to noting the ASEE copyright, list all the original authors and their institutions and name the host city of the conference. - Last updated April 1, 2015