Leveraging Narrative Feedback in Programmatic Assessment: The Potential of Automated Text Analysis to Support Coaching and Decision-Making in Programmatic Assessment

Cited by: 0
Authors
Nair, Balakrishnan R. [1 ]
Loon, Joyce M. W. Moonen-van [2 ]
van Lierop, Marion [3 ]
Govaerts, Marjan [2 ]
Affiliations
[1] Univ Newcastle, Ctr Med Profess Dev, Newcastle, Australia
[2] Maastricht Univ, Sch Hlth Profess Educ, Fac Hlth Med & Life Sci, Maastricht, Netherlands
[3] Maastricht Univ, Fac Hlth Med & Life Sci, Dept Family Med, Maastricht, Netherlands
Keywords
programmatic assessment; narrative feedback; learning analytics; text mining; international medical graduates; PERFORMANCE;
DOI
10.2147/AMEP.S465259
Chinese Library Classification
G40 [Education];
Discipline Codes
040101; 120403;
Abstract
Introduction: Current assessment approaches increasingly use narratives to support learning, coaching, and high-stakes decision-making. Interpreting narratives, however, can be challenging and time-consuming, potentially resulting in suboptimal or inadequate use of assessment data. Support for learners, coaches, and decision-makers in using and interpreting these narratives therefore seems essential.
Methods: We explored the utility of automated text analysis techniques to support interpretation of narrative assessment data, collected across 926 clinical assessments of 80 trainees in an International Medical Graduates' licensing program in Australia. We employed topic modelling and sentiment analysis to automatically identify predominant feedback themes and the sentiment polarity of feedback messages. We furthermore examined the associations between feedback polarity, numerical performance scores, and overall judgments of task performance.
Results: Topic modelling yielded three distinct feedback themes: Medical Skills, Knowledge, and Communication & Professionalism. The volume of feedback varied across topics and clinical settings, but assessors used more words when providing feedback to trainees who did not meet competence standards. Although sentiment polarity and performance scores did not correlate at the level of single assessments, we found a strong positive correlation between average performance scores and average algorithmically assigned sentiment polarity.
Discussion: This study shows that automated text analysis techniques can pave the way for a more efficient, structured, and meaningful learning, coaching, and assessment experience for learners, coaches, and decision-makers alike. When used appropriately, these techniques may facilitate more meaningful and in-depth conversations about assessment data by supporting stakeholders in interpreting large amounts of feedback. Future research is vital to fully unlock the potential of automated text analysis and to support its meaningful integration into educational practices.
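To make the two methods named in the abstract concrete, here is a minimal, illustrative sketch (not the authors' actual pipeline) of topic modelling with scikit-learn's LatentDirichletAllocation plus a toy lexicon-based sentiment polarity score; the feedback snippets and the POS/NEG word lists are invented for demonstration:

```python
# Illustrative sketch: LDA topic modelling and lexicon-based sentiment
# polarity over short (invented) feedback narratives.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

feedback = [
    "excellent history taking and physical examination skills",
    "knowledge of pharmacology was below the expected standard",
    "communicated clearly and professionally with the patient",
    "poor examination technique, needs more practice",
    "strong clinical knowledge, answered questions well",
    "unprofessional manner, interrupted the patient repeatedly",
]

# Topic modelling: fit LDA with 3 topics (mirroring the three feedback
# themes reported in the study) on a bag-of-words representation.
vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(feedback)
lda = LatentDirichletAllocation(n_components=3, random_state=0)
doc_topics = lda.fit_transform(X)  # shape (n_docs, 3); each row sums to 1

# Sentiment polarity: count positive minus negative cue words from a
# toy lexicon (real systems would use a trained model or full lexicon).
POS = {"excellent", "clearly", "professionally", "strong", "well"}
NEG = {"below", "poor", "unprofessional", "interrupted"}

def polarity(text: str) -> int:
    words = set(text.lower().split())
    return len(words & POS) - len(words & NEG)

polarities = [polarity(f) for f in feedback]
```

Each document's `doc_topics` row gives its distribution over the three inferred themes, and `polarities` gives a crude per-assessment sentiment score that could then be averaged per trainee, as in the correlation analysis the abstract reports.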
Pages: 671-683
Page count: 13