An Investigation of the Sources of Measurement Error in the Post-Encounter Written Scores from Standardized Patient Examinations

Cited: 18
Authors
Boulet, Jack R. [1 ]
Ben-David, Miriam Friedman [1 ]
Hambleton, Ronald K. [2 ]
Burdick, William [1 ]
Ziv, Amitai [1 ]
Gary, Nancy E. [1 ]
Affiliations
[1] Educational Commission for Foreign Medical Graduates, Philadelphia, PA 19104 USA
[2] University of Massachusetts, Amherst, MA 01003 USA
Keywords
clinical skills assessment; generalizability; reliability; scoring
DOI
10.1023/A:1009712810810
Chinese Library Classification (CLC) number
G40 [Education]
Subject classification codes
040101; 120403
Abstract
Purpose. Post-encounter written exercises (e.g., patient notes) have been included in clinical skills assessments that use standardized patients. The purpose of this study was to estimate the generalizability of the scores from these written exercises when they are rated by various trained health professionals, including physicians.
Method. The patient notes from a 10-station clinical skills examination involving 10 first-year emergency medicine residents were analytically scored by four rater groups: three physicians, three nurses, three fourth-year medical students, and three billing clerks. Generalizability analyses were used to partition the various sources of error variance and to derive reliability-like coefficients for each group of raters.
Results. The generalizability analyses indicated that case-to-case variability was a major source of error variance in the patient note scores. The variance attributable to the rater, or to the rater-by-examinee interaction, was negligible; this finding was consistent across the four rater groups. Generalizability coefficients in excess of 0.80 were achieved for each of the four sets of raters, although physicians produced the most dependable scores.
Conclusion. From a reliability perspective, there is little advantage in using more than one adequately trained physician, or other health professional, to score the patient note. Measurement error is introduced primarily by case-sampling variability. This suggests that, if required, the generalizability of the patient note scores can be increased by adding cases rather than raters.
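As a reading aid, the case-versus-rater trade-off described in the conclusion can be expressed through the standard decision-study (D-study) form of the generalizability coefficient. The sketch below is a minimal illustration assuming a fully crossed persons (p) x cases (c) x raters (r) random-effects design with n_c cases and n_r raters contributing to each examinee's score; the abstract does not report the exact design or the estimated variance components, so the symbols are generic.

\[
E\rho^{2} \;=\; \frac{\sigma^{2}_{p}}
{\sigma^{2}_{p} \;+\; \dfrac{\sigma^{2}_{pc}}{n_{c}} \;+\; \dfrac{\sigma^{2}_{pr}}{n_{r}} \;+\; \dfrac{\sigma^{2}_{pcr,e}}{n_{c}\, n_{r}}}
\]

Because the rater-related components (\sigma^{2}_{pr}, and the rater portion of the residual \sigma^{2}_{pcr,e}) were found to be negligible while the person-by-case component \sigma^{2}_{pc} was large, increasing n_{c} shrinks the dominant error terms in the denominator, whereas increasing n_{r} beyond a single adequately trained rater changes the coefficient very little.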
Pages: 89-100
Number of pages: 12