Comparing human text classification performance and explainability with large language and machine learning models using eye-tracking

Cited by: 0
Authors
Venkatesh, Jeevithashree Divya [1 ]
Jaiswal, Aparajita [2 ]
Nanda, Gaurav [1 ]
Affiliations
[1] Purdue Univ, Sch Engn Technol, W Lafayette, IN 47907 USA
[2] Purdue Univ, Ctr Intercultural Learning Mentorship Assessment &, W Lafayette, IN 47907 USA
Source
SCIENTIFIC REPORTS | 2024, Vol. 14, No. 1
Keywords
Human-AI alignment; Large language models; Explainable AI; Eye tracking; Cognitive engineering; Human-computer interaction; MOVEMENTS; GAZE;
DOI
10.1038/s41598-024-65080-7
Chinese Library Classification (CLC) codes
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences]
Subject classification codes
07; 0710; 09
Abstract
To understand the alignment between the reasoning of humans and artificial intelligence (AI) models, this empirical study compared human text classification performance and explainability with those of a traditional machine learning (ML) model and a large language model (LLM). A domain-specific, noisy textual dataset of 204 injury narratives had to be classified into 6 cause-of-injury codes. The narratives varied in complexity and ease of categorization depending on how distinctive their cause-of-injury code was. The user study involved 51 participants whose eye-tracking data were recorded while they performed the text classification task. While the ML model was trained on 120,000 pre-labelled injury narratives, the LLM and the human participants did not receive any specialized training. The explainability of the different approaches was compared based on the top words they used to make classification decisions. These words were identified using eye-tracking for humans, the explainable AI approach LIME for the ML model, and prompts for the LLM. The classification performance of the ML model was relatively better than that of the zero-shot LLM and non-expert humans overall, and particularly for narratives with high complexity and difficult categorization. The top-3 predictive words used by the ML model and the LLM agreed with the humans' top words to a greater extent than did the lower-ranked predictive words.
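To make the comparison of explainability approaches more concrete, the following minimal sketch shows how top predictive words can be extracted with LIME from a conventional text classifier. It is illustrative only, not the authors' exact pipeline: the variable names (train_texts, train_labels, narrative), the placeholder cause-of-injury labels, and the TF-IDF plus logistic regression model are assumptions; only the use of LIME for ML-model explainability is taken from the abstract.

```python
# Illustrative sketch only: a stand-in text classifier plus LIME, used to
# list the top predictive words behind one classification decision.
# Assumptions: train_texts is a list of injury-narrative strings,
# train_labels are integer codes 0-5 aligned with cause_codes below,
# and narrative is a single narrative string to explain.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from lime.lime_text import LimeTextExplainer

cause_codes = ["code_0", "code_1", "code_2",
               "code_3", "code_4", "code_5"]  # placeholder cause-of-injury labels

# Baseline classifier (a stand-in for the paper's ML model) trained on
# pre-labelled injury narratives.
model = make_pipeline(TfidfVectorizer(stop_words="english"),
                      LogisticRegression(max_iter=1000))
model.fit(train_texts, train_labels)

# LIME perturbs the narrative and fits a local surrogate model; its
# highest-weighted words can then be compared with human eye-tracking
# fixations and with the words an LLM cites in its prompted explanation.
explainer = LimeTextExplainer(class_names=cause_codes)
predicted = int(model.predict([narrative])[0])
explanation = explainer.explain_instance(narrative,
                                         model.predict_proba,
                                         labels=[predicted],
                                         num_features=3)
top_words = [word for word, weight in explanation.as_list(label=predicted)]
print(top_words)  # top-3 predictive words for this narrative
```

The resulting top-k word lists are what the study compares across approaches: LIME-derived words for the ML model, fixated words from eye-tracking for humans, and prompted explanations for the LLM.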
Pages: 12