Explainable Pre-Trained Language Models for Sentiment Analysis in Low-Resourced Languages

Cited by: 0
Authors
Mabokela, Koena Ronny [1 ,4 ]
Primus, Mpho [2 ]
Celik, Turgay [3 ]
Affiliations
[1] Univ Johannesburg, Bunting Rd Campus, ZA-2092 Auckland Pk, South Africa
[2] Univ Johannesburg, Dept Zool, Kingsway Campus, Auckland Pk, South Africa
[3] Univ Agder, Ctr Artificial Intelligence Res CAIR, Dept ICT, Grimstad, N-4879, Norway
[4] Univ Witwatersrand, Sch Elect & Informat Engn, ZA-2000 Johannesburg, South Africa
Funding
National Research Foundation of Singapore;
Keywords
explainable AI; sentiment analysis; African languages; Afrocentric models; pre-trained models; transformer models; CLASSIFICATION;
DOI
10.3390/bdcc8110160
CLC number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Sentiment analysis is a crucial tool for measuring public opinion and understanding human communication across digital social media platforms. However, due to linguistic complexities and limited data or computational resources, it is under-represented in many African languages. While state-of-the-art Afrocentric pre-trained language models (PLMs) have been developed for various natural language processing (NLP) tasks, their applications in eXplainable Artificial Intelligence (XAI) remain largely unexplored. In this study, we propose a novel approach that combines Afrocentric PLMs with XAI techniques for sentiment analysis. We demonstrate the effectiveness of incorporating attention mechanisms and visualization techniques in improving the transparency, trustworthiness, and decision-making capabilities of transformer-based models when making sentiment predictions. To validate our approach, we employ the SAfriSenti corpus, a multilingual sentiment dataset for South African under-resourced languages, and perform a series of sentiment analysis experiments. These experiments enable comprehensive evaluations, comparing the performance of Afrocentric models against mainstream PLMs. Our results show that the Afro-XLMR model outperforms all other models, achieving an average F1-score of 71.04% across five tested languages, and the lowest error rate among the evaluated models. Additionally, we enhance the interpretability and explainability of the Afro-XLMR model using Local Interpretable Model-Agnostic Explanations (LIME) and Shapley Additive Explanations (SHAP). These XAI techniques ensure that sentiment predictions are not only accurate and interpretable but also understandable, fostering trust and reliability in AI-driven NLP technologies, particularly in the context of African languages.
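The abstract describes explaining a transformer sentiment model's predictions with LIME, which perturbs the input text, queries the model on each perturbed variant, and fits a weighted linear surrogate whose coefficients score each word's influence. As a minimal illustration of that idea (not the paper's implementation, which uses Afro-XLMR and the actual LIME/SHAP libraries), the sketch below substitutes a toy lexicon-based scorer for the pre-trained model; `predict_proba`, `lime_explain`, and the lexicon are hypothetical stand-ins.

```python
import random
import numpy as np

# Toy lexicon scorer standing in for a transformer PLM such as Afro-XLMR
# (hypothetical stand-in; no real pre-trained model is loaded here).
LEXICON = {"good": 1.0, "great": 1.5, "bad": -1.0, "terrible": -1.5}

def predict_proba(texts):
    """Return P(positive) for each text via a logistic over lexicon scores."""
    scores = [sum(LEXICON.get(w, 0.0) for w in t.split()) for t in texts]
    return 1.0 / (1.0 + np.exp(-np.array(scores)))

def lime_explain(text, n_samples=500, seed=0):
    """LIME-style explanation: randomly mask words, query the model on each
    perturbed sample, then fit a weighted linear surrogate whose coefficients
    estimate each word's contribution to the prediction."""
    rng = random.Random(seed)
    words = text.split()
    X, y, weights = [], [], []
    for _ in range(n_samples):
        mask = [rng.random() < 0.5 for _ in words]
        if not any(mask):
            continue  # skip the empty perturbation
        sample = " ".join(w for w, m in zip(words, mask) if m)
        X.append([float(m) for m in mask])
        y.append(predict_proba([sample])[0])
        # Proximity kernel: samples closer to the full sentence count more.
        weights.append(np.exp(-(1.0 - sum(mask) / len(words))))
    X, y, w = np.array(X), np.array(y), np.array(weights)
    Xb = np.hstack([np.ones((len(X), 1)), X])  # intercept column
    W = np.diag(w)
    # Weighted least squares fit of the local linear surrogate.
    coef = np.linalg.lstsq(W @ Xb, W @ y, rcond=None)[0]
    return dict(zip(words, coef[1:]))

explanation = lime_explain("the food was good not terrible")
```

With the toy scorer, the surrogate assigns a positive weight to "good" and a negative weight to "terrible", which is the kind of token-level attribution the paper visualizes for its Afro-XLMR predictions.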
Pages: 25