Interrater agreement of two adverse drug reaction causality assessment methods: A randomised comparison of the Liverpool Adverse Drug Reaction Causality Assessment Tool and the World Health Organization-Uppsala Monitoring Centre system

Cited by: 24
Authors
Mouton, Johannes P. [1 ]
Mehta, Ushma [1 ]
Rossiter, Dawn P. [1 ]
Maartens, Gary [1 ]
Cohen, Karen [1 ]
Affiliation
[1] Univ Cape Town, Div Clin Pharmacol, Dept Med, Cape Town, South Africa
Keywords
DISAGREEMENT; KAPPA;
DOI: 10.1371/journal.pone.0172830
Chinese Library Classification: O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biosciences]; N [General Natural Sciences]
Discipline classification codes: 07; 0710; 09
Abstract

Introduction: A new method to assess causality of suspected adverse drug reactions, the Liverpool Adverse Drug Reaction Causality Assessment Tool (LCAT), showed high interrater agreement when used by its developers. Our aim was to compare the interrater agreement achieved by the LCAT to that achieved by another causality assessment method, the World Health Organization-Uppsala Monitoring Centre system for standardised case causality assessment (WHO-UMC system), in our setting.

Methods: Four raters independently assessed adverse drug reaction causality of 48 drug-event pairs, identified during a hospital-based survey. A randomised design ensured that no washout period was required between assessments with the two methods. We compared the methods' interrater agreement by calculating agreement proportions, kappa statistics, and the intraclass correlation coefficient. We identified potentially problematic questions in the LCAT by comparing raters' responses to individual questions.

Results: Overall unweighted kappa was 0.61 (95% CI 0.43 to 0.80) with the WHO-UMC system and 0.27 (95% CI 0.074 to 0.46) with the LCAT. Pairwise unweighted Cohen kappa ranged from 0.33 to 1.0 with the WHO-UMC system and from 0.094 to 0.71 with the LCAT. The intraclass correlation coefficient was 0.86 (95% CI 0.74 to 0.92) with the WHO-UMC system and 0.61 (95% CI 0.39 to 0.77) with the LCAT. Two LCAT questions were identified as significant points of disagreement.

Discussion: We were unable to replicate the high interrater agreement achieved by the LCAT developers and instead found its interrater agreement to be lower than that achieved when using the WHO-UMC system. We identified potential reasons for this and recommend priority areas for improving the LCAT.
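The study's headline comparison rests on unweighted Cohen's kappa, which corrects the raw agreement proportion for the agreement expected by chance from each rater's marginal category frequencies. As a toy illustration only (the ratings below are invented, not the study's data or analysis code), unweighted Cohen's kappa for a pair of raters can be computed as:

```python
from collections import Counter

def cohen_kappa(rater1, rater2):
    """Unweighted Cohen's kappa for two raters over the same cases.

    kappa = (p_obs - p_exp) / (1 - p_exp), where p_obs is the observed
    agreement proportion and p_exp the chance agreement implied by the
    raters' marginal category frequencies.
    """
    assert len(rater1) == len(rater2) and rater1
    n = len(rater1)
    # Observed agreement: fraction of cases rated identically.
    p_obs = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Expected agreement from the product of the marginal frequencies.
    m1, m2 = Counter(rater1), Counter(rater2)
    p_exp = sum(m1[c] * m2[c] for c in set(m1) | set(m2)) / n**2
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical causality ratings for six drug-event pairs:
r1 = ["certain", "certain", "probable", "probable", "possible", "possible"]
r2 = ["certain", "certain", "probable", "possible", "possible", "possible"]
print(round(cohen_kappa(r1, r2), 2))  # 5/6 raw agreement, kappa = 0.75
```

The gap between raw agreement (5/6 ≈ 0.83) and kappa (0.75) in this toy case shows why the paper reports kappa rather than simple agreement proportions alone: kappa discounts matches that the raters' marginal tendencies would produce by chance.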
Pages: 13