The validity of performance-based measures of clinical reasoning and alternative approaches

Times cited: 34
Authors
Kreiter, Clarence D. [1 ]
Bergus, George [1 ]
Affiliations
[1] Univ Iowa, Coll Med, Dept Family Med, Iowa City, IA 52246 USA
DOI
10.1111/j.1365-2923.2008.03281.x
Chinese Library Classification (CLC)
G40 [Education]
Discipline codes
040101; 120403
Abstract
The development of a valid and reliable measure of clinical reasoning ability is a prerequisite to advancing our understanding of clinically relevant cognitive processes and to improving clinical education. A record of problem-solving performances within standardised and computerised patient simulations is often implicitly assumed to reflect clinical reasoning skills. However, the validity of this measurement method for assessing clinical reasoning is open to question. Explicitly defining the intended clinical reasoning construct should help researchers critically evaluate current performance score interpretations. Although case-specific measurement outcomes (i.e. low correlations between cases) have led medical educators to endorse performance-based assessments of problem solving as a method of measuring clinical reasoning, the matter of low across-case generalisation is a reliability issue with validity implications and does not necessarily support a performance-based approach. Given this, it is important to critically examine whether our current performance-based testing efforts are correctly focused. To design a valid educational assessment of clinical reasoning requires a coherent argument represented as a chain of inferences supporting a clinical reasoning interpretation. Suggestions are offered for assessing how well an examinee's existing knowledge organisation accommodates the integration of new patient information, and for focusing assessments on an examinee's understanding of how new patient information changes case-related probabilities and base rates.
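The abstract's closing suggestion — assessing whether an examinee understands how new patient information changes case-related probabilities and base rates — is, in probabilistic terms, Bayesian updating. A minimal sketch of that calculation (all clinical numbers here are hypothetical, chosen only for illustration):

```python
# Illustrative Bayesian update of a diagnostic probability: how a positive
# test result shifts a disease base rate. All numbers are hypothetical.

def posterior_given_positive(base_rate: float,
                             sensitivity: float,
                             specificity: float) -> float:
    """P(disease | positive test) via Bayes' theorem."""
    true_pos = base_rate * sensitivity               # P(D) * P(+ | D)
    false_pos = (1 - base_rate) * (1 - specificity)  # P(not D) * P(+ | not D)
    return true_pos / (true_pos + false_pos)

# A condition with a 2% base rate, tested at 90% sensitivity / 95% specificity:
p = posterior_given_positive(0.02, 0.90, 0.95)
print(f"{p:.3f}")  # the positive result raises the probability from 0.02 to ~0.269
```

An assessment item built on this logic would ask the examinee how the pre-test probability should move, rather than scoring the surface steps of a simulated workup.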
Pages: 320-325 (6 pages)