Location: Columbus, Ohio
Publication Date: June 24, 2017
Start Date: June 24, 2017
End Date: June 28, 2017
Conference Session: Student Division Innovative Research Methods Technical Session
Tagged Division: Student
Page Count: 9
DOI: 10.18260/1-2--28777
Permanent URL: https://peer.asee.org/28777
Download Count: 36876
Anne McAlister is an undergraduate student at The Ohio State University studying chemical engineering.
Dennis M. Lee is a doctoral student in the Engineering and Science Education Department and a Graduate Research Assistant in the Office of the Associate Dean for Undergraduate Studies in the College of Engineering, Computing, and Applied Sciences at Clemson University. He received his BA and MS in bacteriology from the University of Wisconsin-Madison. Prior to his studies at Clemson University, he taught introductory biology at Tri-County Technical College in Pendleton, SC. His research interests include the development of researcher identity and epistemic cognition in undergraduate STEM students.
Katherine M. Ehlert is a doctoral student in the Engineering and Science Education Department in the College of Engineering, Computing, and Applied Sciences at Clemson University. She earned her BS in Mechanical Engineering from Case Western Reserve University and her MS in Mechanical Engineering focusing on Biomechanics from Cornell University. Prior to her enrollment at Clemson, Katherine worked as a Biomedical Engineering consultant in Philadelphia, PA. Her research interests include identity development through co- and extracurricular experiences for engineering students.
Dr. Rachel Louis Kajfez is an Assistant Professor in the Department of Engineering Education at The Ohio State University. She earned her B.S. and M.S. degrees in Civil Engineering from Ohio State and earned her Ph.D. in Engineering Education from Virginia Tech. Her research interests focus on the intersection between motivation and identity of undergraduate and graduate students, first-year engineering programs, mixed methods research, and innovative approaches to teaching.
Courtney Faber is a Lecturer and Research Assistant Professor in the College of Engineering Honors Program at the University of Tennessee. She completed her Ph.D. in Engineering & Science Education at Clemson University. Prior to her Ph.D. work, she received her B.S. in Bioengineering at Clemson University and her M.S. in Biomedical Engineering at Cornell University. Courtney's research interests include epistemic cognition in the context of problem solving and researcher identity.
Marian Kennedy is an Associate Professor in the Department of Materials Science & Engineering at Clemson University. Her research group focuses on the mechanical and tribological characterization of thin films. She also contributes to the engineering education community through research related to undergraduate research programs and the navigational capital needed for graduate school.
When using qualitative coding techniques, establishing inter-rater reliability (IRR) is a recognized way of demonstrating the trustworthiness of a study. However, the process of determining IRR manually is not always clear, especially when specialized qualitative coding software that calculates reliability automatically is not used. Coding methods that do not rely on such software vary greatly: researchers use spreadsheet software, word processing software, or even hard copies marked with different colored highlighters, which in turn leads to a variety of methods for calculating IRR. This paper summarizes one approach to establishing IRR for studies in which common word processing software is used. The authors provide recommendations, or "tricks of the trade," for researchers performing qualitative coding who are seeking ideas on how to calculate IRR without specialized software.
The process discussed in this paper uses Microsoft Word® (Word) and Excel® (Excel). First, the interview transcripts were coded in Word, with codes inserted at the appropriate locations as comments in the document. A macro (a customizable function that combines many commands into a single process) was then used to extract these comments into a table in a separate document. The table was then moved into Excel to enable comparison of codes between individual coders. We compared codes and phrases to determine coder agreement for each participant and then calculated IRR as the proportion of agreed codes over the total number of codes in the document. We calculated overall IRR (among all three coders) as well as IRR between each pair of coders.
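To make that pipeline concrete, the sketch below reproduces the two mechanical steps in Python rather than in a Word macro and an Excel sheet: it pulls the comment text out of each coder's .docx file (Word stores comments in the word/comments.xml part of the document package) and computes percent agreement as agreed codes over total codes. The file names, the exact-match rule for counting two codes as agreeing, and the three-coder layout are illustrative assumptions, not details taken from the paper; the authors compared codes and phrases per participant before counting agreement.

import zipfile
import xml.etree.ElementTree as ET
from itertools import combinations

# WordprocessingML namespace used inside .docx packages.
W = "{http://schemas.openxmlformats.org/wordprocessingml/2006/main}"

def extract_codes(docx_path):
    """Return the comment texts (codes) stored in a coded .docx file."""
    with zipfile.ZipFile(docx_path) as docx:
        try:
            comments_xml = docx.read("word/comments.xml")
        except KeyError:          # document has no comments at all
            return []
    root = ET.fromstring(comments_xml)
    codes = []
    for comment in root.iter(W + "comment"):
        # One comment may span several text runs; join them into one code.
        codes.append("".join(t.text or "" for t in comment.iter(W + "t")).strip())
    return codes

def percent_agreement(codes_a, codes_b):
    """Agreed codes over the total number of codes applied to the document."""
    agreed = len(set(codes_a) & set(codes_b))
    total = max(len(codes_a), len(codes_b))
    return agreed / total if total else 1.0

# Hypothetical file names: one coded copy of the same transcript per coder.
files = {"coder1": "p01_coder1.docx",
         "coder2": "p01_coder2.docx",
         "coder3": "p01_coder3.docx"}
codes = {name: extract_codes(path) for name, path in files.items()}

# IRR between each pair of coders.
for a, b in combinations(codes, 2):
    print(a, "vs", b, "->", round(100 * percent_agreement(codes[a], codes[b])), "%")

# Overall IRR: codes all three coders agreed on, over the total applied.
agreed_all = len(set.intersection(*(set(c) for c in codes.values())))
total = max(len(c) for c in codes.values())
print("overall ->", round(100 * agreed_all / total) if total else 100, "%")

The arithmetic is the same percent-agreement count the Excel comparison performs, applied per participant and then across the dataset; only the tooling differs.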
Our coding and IRR methods were employed on a dataset from a survey taken by undergraduate students at five different universities (n = 154). Participants' responses to open-ended survey questions were coded by three researchers using inductive, open coding. A total of 64 codes were developed during an initial pass through the data; the three coders then analyzed the remaining responses independently, and their codes were compared using the IRR method described above. Through this process, the three coders consistently achieved 80-90% IRR on 95% of the codes.
Using this process could accelerate and standardize IRR practices in qualitative studies. This paper discusses the "tricks of the trade" used in implementing the method so that other researchers can employ a similar approach in their own work. For example, coding the surrounding context, not just the exact word or phrase that prompted the code, is key to comparing codes across coders. This trick, along with others, will be expanded upon in the full version of this paper.
McAlister, A. M., & Lee, D. M., & Ehlert, K. M., & Kajfez, R. L., & Faber, C. J., & Kennedy, M. S. (2017, June), Qualitative Coding: An Approach to Assess Inter-Rater Reliability. Paper presented at 2017 ASEE Annual Conference & Exposition, Columbus, Ohio. 10.18260/1-2--28777
ASEE holds the copyright on this document. It may be read by the public free of charge. Authors may archive their work on personal websites or in institutional repositories with the following citation: © 2017 American Society for Engineering Education. Other scholars may excerpt or quote from these materials with the same citation. When excerpting or quoting from Conference Proceedings, authors should, in addition to noting the ASEE copyright, list all the original authors and their institutions and name the host city of the conference.