Comprehensive reliability assessment and comparison of quality indicators and their components

Cited by: 48
Author
Huff, ED [1]
Affiliation
[1] Hlth Care Qual Anal, Amherst, NH 03031 USA
Keywords
reliability; quality indicator; quality of measurement; inter-rater agreement; kappa
DOI
10.1016/S0895-4356(97)00218-7
Chinese Library Classification
R19 [Health care organization and administration (health services management)]
Abstract
To test whether conventional data reliability assessment overestimates reliability, the reliability of complex quality indicators and of their simpler components was assessed and compared. Medical records of 1078 Medicare cases with principal diagnoses of initial episodes of acute myocardial infarction (AMI) were independently reabstracted at two national Clinical Data Abstraction Centers (CDACs). Inter-rater agreement beyond chance (kappa) between reabstracted and original quality indicators and their key components was computed and compared. Results showed excellent agreement (kappas ranging from 0.88 to 0.95) for simple determinations of whether standard medical therapies were provided. Repeatability of eligibility status and of the more complex determinations of whether "ideal" candidates went untreated showed moderate to excellent kappa values ranging from 0.41 to 0.79. A planned comparison of five similar quality indicators and their key components showed that the simpler treatment components, as a group, had significantly higher kappas than the eligibility components and composite indicators derived through more complex logic (Fisher's exact test, p < 0.02). Reliability assessment of quality indicators should therefore be based on the repeatability of the whole indicator, accounting for both data and logic, and not just on a single simple element. (C) 1997 Elsevier Science Inc.
Pages: 1395-1404 (10 pages)
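
For readers unfamiliar with the statistic, Cohen's kappa corrects the observed agreement rate between two raters for the agreement expected by chance alone: kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement and p_e is chance agreement computed from each rater's marginal rates. The following minimal Python sketch illustrates the calculation for a simple binary treatment determination of the kind the abstract describes (e.g., "was the therapy given?"); the data are hypothetical and not from the study.

    # Illustrative sketch only, not the authors' code.
    def cohen_kappa(rater_a, rater_b):
        """Chance-corrected agreement between two raters on nominal codes."""
        assert len(rater_a) == len(rater_b)
        n = len(rater_a)
        categories = sorted(set(rater_a) | set(rater_b))

        # Observed agreement: proportion of cases where the raters match.
        p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n

        # Chance agreement: product of each rater's marginal rate per category.
        p_e = sum(
            (rater_a.count(c) / n) * (rater_b.count(c) / n) for c in categories
        )
        return (p_o - p_e) / (1 - p_e)

    # Hypothetical original abstraction vs. reabstraction of one component.
    original   = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
    reabstract = [1, 1, 0, 1, 1, 1, 1, 0, 1, 1]
    print(f"kappa = {cohen_kappa(original, reabstract):.2f}")

With nine matches out of ten cases and these marginals, the sketch prints kappa = 0.74: raw agreement of 0.90 is discounted for the 0.62 agreement expected by chance, which is why kappa can look much lower than percent agreement for skewed determinations.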