Inter-rater reliability of the STEP protocol

Cited by: 6
Authors:
Edman, Å [1]
Mahnfeldt, M
Wallin, A
Affiliations:
[1] Univ Gothenburg, Sahlgrenska Univ Hosp, Inst Clin Neurosci, Psychiat Sect, SE-43180 Molndal, Sweden
[2] Varberg Hosp, Dept Psychiat, Varberg, Sweden
DOI: 10.1177/089198870101400307
CLC classification: R592 [Geriatrics]; C [Social sciences, general]
Subject classification codes: 03; 0303; 100203
Abstract
An inter-rater reliability test of the Stepwise Comparative Status Analysis (STEP) is presented. The STEP is a protocol for the clinical examination of patients with dementia within the scope of a neuropsychiatric investigation. It combines psychiatric and neurologic bedside examination methods. The analysis proceeds in three steps in which primary, observable symptom variables are successively aggregated via compound variables to the final determination of one of seven possible dominant regional brain syndromes (global, frontal, subcortical, parietal, frontosubcortical, frontoparietal, other), here also called complex variables. In the present study, two senior physicians assessed 50 patients independently and simultaneously. None of the patients was known to both physicians. In 42 patients (84%), the two clinicians determined the same dominant brain syndrome. The probability (P value) of this level of agreement or better was calculated at 2.0 × 10^-12. Kappa coefficients were calculated as a measure of assessment agreement for the 50 STEP variables. For 20 variables, the coefficient was 0.75 or above, indicating excellent agreement; for 22 variables, the coefficient was below 0.75 and above 0.40, indicating moderate agreement; and for 4 variables, the value was 0.40 or below, indicating poor agreement. Kappa calculations for the assessments of four variables were either not possible or were considered inappropriate.
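The abstract interprets kappa against conventional thresholds (≥0.75 excellent, 0.40–0.75 moderate, ≤0.40 poor). As a reminder of what the statistic measures, here is a minimal sketch of Cohen's kappa for two raters, computed from chance-corrected agreement; the ratings below are hypothetical and not taken from the study:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of cases where the raters agree.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement, from each rater's marginal frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical binary ratings of 10 patients on one STEP variable
a = [1, 1, 0, 1, 0, 1, 1, 0, 0, 1]
b = [1, 1, 0, 1, 1, 1, 0, 0, 0, 1]
print(round(cohens_kappa(a, b), 2))  # 0.58 -> moderate agreement
```

With 8/10 observed agreement but 0.52 expected by chance, kappa lands at 0.58, which the study's convention would class as moderate rather than excellent agreement.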
Pages: 140–144 (5 pages)