Interrater Reliability of Standardized Actors Versus Nonactors in a Simulation Based Assessment of Interprofessional Collaboration

Cited by: 10
Authors
Dickter, David N. [1 ]
Stielstra, Sorrel [1 ]
Lineberry, Matthew [2 ]
Affiliations
[1] Western Univ Hlth Sci, Dept Educ, Pomona, CA USA
[2] Univ Illinois, Dept Med Educ, Chicago, IL USA
Source
SIMULATION IN HEALTHCARE-JOURNAL OF THE SOCIETY FOR SIMULATION IN HEALTHCARE | 2015, Vol. 10, No. 4
Keywords
COMMUNICATION-SKILLS
DOI
10.1097/SIH.0000000000000094
Chinese Library Classification
R19 [Health-care organization and services (health services administration)]
Abstract
Introduction: There is a need for reliable and practical interprofessional simulations that measure collaborative practice in outpatient/community scenarios, where most health care takes place. The authors applied generalizability theory to examine reliability in an ambulatory care scenario using two trained observer groups: standardized patient (SP) actor raters and raters who received rater training alone (non-SPs).

Methods: Twenty-one graduate health professions students participated as health care providers in an interprofessional care simulation involving an SP, a caregiver, and clinicians. Six observers in each group received frame-of-reference training and rated aspects of collaborative care using a behavioral observation checklist. The authors examined sources of measurement variance using generalizability theory and extended this technique to statistically compare the rater types and to compute reliability for subsets of raters.

Results: Standardized patient ratings were significantly more reliable than non-SP ratings, despite both groups receiving extensive rater training. A single SP was predicted to generate scores with a reliability of 0.74, whereas a single non-SP rater's scores were predicted at a reliability of 0.40. Removing each rater one by one from the full 6-member SP sample reduced reliability similarly for all raters (reliability, 0.86-0.89). However, removing individual raters from the full 6-member non-SP sample led to more variable reductions in reliability (0.58-0.72).

Conclusions: Ongoing experience rating performance from within a particular simulation-based assessment may be a valuable rater characteristic and more effective than rater training alone. The extensions of reliability estimation introduced here can also be used to support more insightful reliability research and subsequent improvement of rater training and assessment protocols.
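The reliability estimates described above follow from a standard generalizability-theory analysis of a fully crossed persons x raters design: variance components are estimated from the two-way ANOVA mean squares, and the G coefficient is projected for any panel size, including the leave-one-rater-out panels used in the Results. The sketch below is illustrative only (not the authors' code, and the data are hypothetical); it shows how a relative G coefficient and leave-one-out reliabilities can be computed from a score matrix.

```python
import numpy as np

def g_coefficient(scores):
    """Relative G coefficient for a fully crossed persons x raters design.

    `scores` is an (n_persons, n_raters) array with one observation per cell.
    Variance components are solved from the expected mean squares of the
    two-way ANOVA without replication.
    """
    n_p, n_r = scores.shape
    grand = scores.mean()
    person_means = scores.mean(axis=1)
    rater_means = scores.mean(axis=0)

    # Mean squares for persons and for the residual (person x rater, error)
    ms_p = n_r * ((person_means - grand) ** 2).sum() / (n_p - 1)
    resid = scores - person_means[:, None] - rater_means[None, :] + grand
    ms_pr = (resid ** 2).sum() / ((n_p - 1) * (n_r - 1))

    # Expected-mean-square solutions: E[MS_p] = sigma2_pr + n_r * sigma2_p
    var_pr = ms_pr                            # sigma^2(pr, e)
    var_p = max((ms_p - ms_pr) / n_r, 0.0)    # sigma^2(p), clamped at 0

    # Projected reliability for a panel of n_r raters
    return var_p / (var_p + var_pr / n_r)

def leave_one_out(scores):
    """G coefficient after removing each rater column in turn."""
    n_r = scores.shape[1]
    return [g_coefficient(np.delete(scores, j, axis=1)) for j in range(n_r)]
```

With a 21 x 6 matrix of checklist scores, `g_coefficient` would give the full-panel reliability and `leave_one_out` the six reduced-panel values reported in the Results; setting the panel size to 1 in the final ratio yields the single-rater projections (0.74 and 0.40).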
Pages: 249-255 (7 pages)