Portfolios for assessment of paediatric specialist registrars

Cited by: 22
Authors
Melville, C
Rees, M
Brookfield, D
Anderson, J
Affiliations
[1] Univ Keele, Dept Paediat, Keele ST5 5BG, Staffs, England
[2] Royal Shrewsbury Hosp, Dept Paediat, Shrewsbury, Salop, England
[3] Royal Hosp Wolverhampton, Dept Paediat, Wolverhampton WV2 1BT, England
Keywords
education, medical, undergraduate; methods; educational measurement; clinical competence; standards; medical staff, hospitals; paediatrics
DOI
10.1111/j.1365-2929.2004.01961.x
Chinese Library Classification
G40 [Education]
Discipline Classification Codes
040101; 120403
Abstract
INTRODUCTION: In 1997 the Royal College of Paediatrics and Child Health introduced portfolios to guide and monitor the learning of specialist registrars. We studied their value for assessment.
METHODS: Using Biggs' SOLO criteria we devised a marking scheme based on 6 domains of competence: clinical, communication, teaching and learning, ethics and attitudes, management and evaluation, and creation of evidence. We rated portfolios according to the quality of evidence presented and expectations by year of training. We similarly assessed trainee performance in the annual record of in-training assessment (RITA) interview. Specific advice based on the results of the first portfolio assessments was circulated to all trainees, instructing them to increase the structure and decrease the bulk of their portfolios. A second sample of portfolios was reviewed a year later, using similar evaluations, to determine the effects.
RESULTS: A total of 76 portfolios were assessed in year 1 by a single rater; 30 portfolios were assessed in year 2 by 2 independent raters. The quality of documentation improved from year 1 to year 2, but there was no significant increase in portfolio scores. The inter-rater correlation coefficient of the portfolio assessment method was 0.52 (Cohen's kappa 0.35); that of the RITA interview was 0.71 (Cohen's kappa 0.38). There was moderate inter-assessment correlation between portfolios and RITA interviews (kappa 0.26 in year 1 and 0.29 in year 2). Generalisability analysis suggested that 5 successive ratings by a single observer, or independent ratings by 4 observers on the same occasion, would be needed to yield a generalisability coefficient > 0.8 for the overall portfolio rating.
CONCLUSIONS: This method of portfolio assessment is insufficiently reliable as a sole method for high-stakes, single-instance assessment, but it has a place as part of a triangulation process. Repeated portfolio assessment by paired observers would increase reliability. Longer-term studies are required to establish whether portfolio assessment positively influences learner behaviour.
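The kappa values reported above measure chance-corrected agreement between two raters. As an illustration of the statistic itself (not the authors' actual computation or data), a minimal Python sketch with hypothetical ratings:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Proportion of items on which the two raters agree exactly.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each rater's marginal category frequencies.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical categorical portfolio ratings from two independent raters.
a = [1, 2, 2, 3, 1, 2, 3, 3]
b = [1, 2, 3, 3, 1, 2, 2, 3]
print(round(cohens_kappa(a, b), 3))  # → 0.619
```

A kappa near 0.35, as found for the portfolio assessments, indicates only fair agreement beyond chance, which is why the authors recommend paired or repeated ratings.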
Pages: 1117-1125
Page count: 9
References
21 in total
[1] Biggs JB. Teaching for Quality Learning at University. 3rd edn; 2007.
[2] Boyer E. ENLARGING PERSPECTIV. 1990:15.
[3] Crossley J, Davies H, Humphris G, Jolly B. Generalisability: a key to unlock professional assessment. Medical Education. 2002;36(10):972-978.
[4] Curry L. Cognitive and learning styles in medical education. Academic Medicine. 1999;74(4):409-413.
[5] Davis MH, Ben-David MF, Harden RM, Ker PHJ, McGhee C, Pippard MJ, Snadden D. Portfolio assessment in medical students' final examinations. Medical Teacher. 2001;23(4):357-366.
[6] Ende J. Feedback in clinical medical education. JAMA. 1983;250(6):777-781.
[7] Glassick CE. SCHOLARSHIP ASSESSED. 1997:5.
[8] Khera N. SPECIALIST REGISTRAR.
[9] Koretz D. Assessment in Education. 1998;5:309. DOI: 10.1080/0969595980050302.
[10] Newble DI, Hoare J, Sheldrake PF. The selection and training of examiners for clinical examinations. Medical Education. 1980;14(5):345-349.