Explainable Pre-Trained Language Models for Sentiment Analysis in Low-Resourced Languages

Times Cited: 0
Authors
Mabokela, Koena Ronny [1 ,4 ]
Primus, Mpho [2 ]
Celik, Turgay [3 ]
Affiliations
[1] Univ Johannesburg, Bunting Rd Campus, ZA-2092 Auckland Pk, South Africa
[2] Univ Johannesburg, Dept Zool, Kingsway Campus, Auckland Pk, South Africa
[3] Univ Agder, Ctr Artificial Intelligence Res CAIR, Dept ICT, Grimstad, N-4879, Norway
[4] Univ Witwatersrand, Sch Elect & Informat Engn, ZA-2000 Johannesburg, South Africa
Funding
National Research Foundation, Singapore
Keywords
explainable AI; sentiment analysis; African languages; Afrocentric models; pre-trained models; transformer models
DOI
10.3390/bdcc8110160
Chinese Library Classification
TP18 [Theory of Artificial Intelligence]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Sentiment analysis is a crucial tool for measuring public opinion and understanding human communication across digital social media platforms. However, due to linguistic complexities and limited data or computational resources, it is under-represented in many African languages. While state-of-the-art Afrocentric pre-trained language models (PLMs) have been developed for various natural language processing (NLP) tasks, their applications in eXplainable Artificial Intelligence (XAI) remain largely unexplored. In this study, we propose a novel approach that combines Afrocentric PLMs with XAI techniques for sentiment analysis. We demonstrate the effectiveness of incorporating attention mechanisms and visualization techniques in improving the transparency, trustworthiness, and decision-making capabilities of transformer-based models when making sentiment predictions. To validate our approach, we employ the SAfriSenti corpus, a multilingual sentiment dataset for South African under-resourced languages, and perform a series of sentiment analysis experiments. These experiments enable comprehensive evaluations, comparing the performance of Afrocentric models against mainstream PLMs. Our results show that the Afro-XLMR model outperforms all other models, achieving an average F1-score of 71.04% across five tested languages, and the lowest error rate among the evaluated models. Additionally, we enhance the interpretability and explainability of the Afro-XLMR model using Local Interpretable Model-Agnostic Explanations (LIME) and Shapley Additive Explanations (SHAP). These XAI techniques ensure that sentiment predictions are not only accurate and interpretable but also understandable, fostering trust and reliability in AI-driven NLP technologies, particularly in the context of African languages.
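The LIME-based explanation step described in the abstract can be sketched in miniature. The snippet below is a self-contained illustration of LIME's core idea (perturb the input, query the black-box model, fit a proximity-weighted local linear surrogate); the toy keyword classifier merely stands in for a fine-tuned transformer such as Afro-XLMR, and every name here is hypothetical rather than taken from the authors' implementation.

```python
import numpy as np

# Toy keyword "classifier" standing in for a fine-tuned PLM such as
# Afro-XLMR (hypothetical -- the paper uses a real transformer model).
POSITIVE = {"good", "great", "love"}
NEGATIVE = {"bad", "awful", "hate"}

def predict_proba(texts):
    """Return P(positive sentiment) for each text."""
    probs = []
    for t in texts:
        words = t.lower().split()
        score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
        probs.append(1.0 / (1.0 + np.exp(-score)))  # sigmoid of keyword score
    return np.array(probs)

def lime_word_importance(text, n_samples=500, seed=0):
    """LIME-style local explanation: perturb the input by dropping words,
    query the black-box model, and fit a proximity-weighted linear
    surrogate whose coefficients serve as per-word importances."""
    rng = np.random.default_rng(seed)
    words = text.split()
    Z = rng.integers(0, 2, size=(n_samples, len(words)))  # binary keep-masks
    Z[0] = 1  # always include the unperturbed sentence
    perturbed = [" ".join(w for w, keep in zip(words, z) if keep) for z in Z]
    y = predict_proba(perturbed)
    # Exponential kernel: perturbations closer to the original weigh more.
    dist = 1.0 - Z.mean(axis=1)
    sw = np.sqrt(np.exp(-(dist ** 2) / 0.25))
    X = np.hstack([np.ones((n_samples, 1)), Z])
    coef = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
    return dict(zip(words, coef[1:]))  # skip the intercept

scores = lime_word_importance("the service was great but food tasted awful")
# "great" should receive a positive weight and "awful" a negative one.
```

In practice the `predict_proba` callable would wrap the tokenizer and softmax output of the fine-tuned model, which is exactly the interface the `lime` and `shap` libraries expect.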
Pages: 25