Intercoder Reliability for Use in Qualitative Research and Evaluation

Cited by: 10
Authors
Coleman, Monica L. [1 ,4 ]
Ragan, Moira [2 ]
Dari, Tahani [3 ]
Affiliations
[1] Univ Mississippi, Oxford, MS USA
[2] Ripple Effect, Rockville, MD USA
[3] Univ Detroit Mercy, Detroit, MI USA
[4] Univ Mississippi, Dept Leadership & Counselor Educ, Oxford, MS 38677 USA
Keywords
Intercoder reliability; qualitative analysis; positivist qualitative research; community-based participatory research; participatory evaluation; agreement
DOI
10.1080/07481756.2024.2303715
Chinese Library Classification
G44 [Educational Psychology]
Discipline Codes
0402; 040202
Abstract
Intercoder reliability (ICR) can increase trustworthiness, accuracy, rigor, collaboration, and power sharing in qualitative research. Though not every qualitative design can utilize intercoder reliability, this article highlights how positivist qualitative research, community-based participatory research, and participatory evaluation are all strengthened when intercoder reliability is sought among coding teams. Basic calculations for intercoder reliability can be completed manually or with software, making it an accessible metric to inform counseling research and evaluation and to benefit counselor education and practice. Interpreting metrics such as percent agreement or the kappa statistic depends largely on the data's context and the study's potential impact, which requires researchers and evaluators to assess and understand the consequences of their work. Applying ICR to clinical data such as interviews and other qualitative sources can be used to teach trainees the method in counseling research and evaluation and to make visible the important contributions of qualitative research conducted by collaborative teams in counselor education.
SIGNIFICANCE STATEMENT: Intercoder reliability is a method in qualitative coding analysis that helps multiple coders increase the accuracy of their results based on agreement in their coding. The method is particularly important in approaches to qualitative research and evaluation such as positivist qualitative research, community-based participatory research, and participatory evaluation, where rigor, power sharing, and participation, respectively, are valued among the research teams.
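The basic calculations the abstract refers to can be sketched in a few lines. The following Python snippet is an illustrative sketch, not code from the article; the coder labels and theme names are hypothetical. It computes percent agreement and Cohen's (1960) kappa for two coders who assigned nominal codes to the same ten interview segments:

```python
# Sketch: percent agreement and Cohen's kappa for two coders
# who each assigned one nominal code to the same set of segments.
from collections import Counter

def percent_agreement(coder_a, coder_b):
    """Share of segments both coders labeled identically."""
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

def cohens_kappa(coder_a, coder_b):
    """Cohen's (1960) kappa: observed agreement corrected for chance."""
    n = len(coder_a)
    p_o = percent_agreement(coder_a, coder_b)
    freq_a = Counter(coder_a)
    freq_b = Counter(coder_b)
    # Expected chance agreement from each coder's marginal code frequencies.
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in freq_a)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codes for ten interview segments (illustration only).
a = ["theme1", "theme2", "theme1", "theme3", "theme2",
     "theme1", "theme1", "theme2", "theme3", "theme1"]
b = ["theme1", "theme2", "theme2", "theme3", "theme2",
     "theme1", "theme1", "theme1", "theme3", "theme1"]

print(round(percent_agreement(a, b), 2))  # 0.8
print(round(cohens_kappa(a, b), 2))       # 0.68
```

As the abstract notes, whether 0.8 agreement (kappa 0.68 after chance correction) is adequate depends on the data's context and the study's potential impact, not on a fixed threshold.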
Pages: 136-146
Number of pages: 11