June 24, 2017
June 28, 2017
When using qualitative coding techniques, establishing inter-rater reliability (IRR) is a recognized process for determining the trustworthiness of a study. However, the process of manually determining IRR is not always clear, especially when specialized qualitative coding software that calculates reliability automatically is not being used. Methods of coding without such software vary greatly and include using spreadsheet software, word processing software, or even hard copies with different colored highlighters. This leads to a variety of methods for calculating IRR. This paper summarizes one approach to establishing IRR for studies where common word processing software is used. The authors provide recommendations, or “tricks of the trade,” for researchers performing qualitative coding who may be seeking ideas about how to calculate IRR without specialized software.
The process discussed in this paper uses Microsoft Word® (Word) and Excel® (Excel). First, the interview transcripts were coded in Word, and codes were inserted in the appropriate locations as comments in the document. A macro (a customizable function that combines many commands into a single process) was then used to extract these comments to a table in a separate document. The table was then moved into Excel to enable comparison of codes between individual coders. We compared codes and phrases to determine coder agreement for each participant and then calculated IRR. IRR was calculated as the proportion of agreed codes over the total number of codes in the document. We calculated overall IRR (between all three coders) as well as IRR between each set of coders.
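The agreement calculation described above — the proportion of agreed codes over the total number of codes, computed both pairwise and across all three coders — can be sketched in a few lines. This is a minimal illustration, not the authors' Excel workbook: the function names and the sample code lists are hypothetical, and it assumes each coder's codes have already been extracted and aligned by excerpt.

```python
from itertools import combinations

def percent_agreement(a, b):
    """Proportion of aligned excerpts where two coders assigned the same code."""
    assert len(a) == len(b), "code lists must be aligned excerpt-by-excerpt"
    agreed = sum(1 for x, y in zip(a, b) if x == y)
    return agreed / len(a)

def overall_agreement(coders):
    """Proportion of excerpts where every coder assigned the same code."""
    agreed = sum(1 for codes in zip(*coders) if len(set(codes)) == 1)
    return agreed / len(coders[0])

# Hypothetical aligned code assignments for three coders.
coder_a = ["motivation", "belonging", "identity", "belonging"]
coder_b = ["motivation", "belonging", "identity", "support"]
coder_c = ["motivation", "support",   "identity", "support"]

coders = [coder_a, coder_b, coder_c]
for (i, x), (j, y) in combinations(enumerate(coders, start=1), 2):
    print(f"IRR, coder {i} vs coder {j}: {percent_agreement(x, y):.2f}")
print(f"IRR, all three coders: {overall_agreement(coders):.2f}")
```

In a spreadsheet the same computation is typically a column of exact-match comparisons averaged over the rows; the script above just makes that arithmetic explicit.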
Our coding and IRR methods were employed on a dataset from a survey taken by undergraduate students at five different universities (n=154). In this study, participants’ responses to open-ended survey questions were coded by three researchers using inductive, open coding. A total of 64 codes were developed through an initial pass through the data; the three coders then analyzed the remaining responses independently. Codes from the three researchers were compared using the IRR method described above. Through this process, the three coders consistently achieved 80–90% IRR on 95% of the codes.
Using this process could accelerate or standardize IRR practices in qualitative studies. This paper discusses “tricks of the trade” that were used in the implementation of this method so other researchers can employ a similar approach in their work. For example, coding the context as well as the exact word or phrase that jumps out is key in comparing codes. This trick, along with others, will be expanded upon in the full version of this paper.
McAlister, A. M., Lee, D. M., Ehlert, K. M., Kajfez, R. L., Faber, C. J., & Kennedy, M. S. (2017, June). Qualitative coding: An approach to assess inter-rater reliability. Paper presented at the 2017 ASEE Annual Conference & Exposition, Columbus, Ohio. https://peer.asee.org/28777
ASEE holds the copyright on this document. It may be read by the public free of charge. Authors may archive their work on personal websites or in institutional repositories with the following citation: © 2017 American Society for Engineering Education. Other scholars may excerpt or quote from these materials with the same citation. When excerpting or quoting from Conference Proceedings, authors should, in addition to noting the ASEE copyright, list all the original authors and their institutions and name the host city of the conference. - Last updated April 1, 2015