Explainable AI to understand study interest of engineering students

Cited by: 4
Authors
Ghosh, Sourajit [1 ]
Kamal, Md. Sarwar [2 ]
Chowdhury, Linkon [3 ]
Neogi, Biswarup [1 ]
Dey, Nilanjan [4 ]
Sherratt, Robert Simon [5 ]
Affiliations
[1] JIS Univ, Comp Sci & Engn, Kolkata, India
[2] Univ Technol Sydney, Fac Engn & Informat Technol, Sydney, Australia
[3] East Delta Univ Technol, Comp Sci & Engn, Chattogram, Bangladesh
[4] Techno Int New Town, Comp Sci & Engn, Kolkata, India
[5] Univ Reading, Dept Biomed Engn, Reading, England
Keywords
Explainable AI; Belief rule base; SP-LIME; PCA;
DOI
10.1007/s10639-023-11943-x
Chinese Library Classification: G40 [Education]
Discipline codes: 040101; 120403
Abstract
Students are the future of a nation, and personalizing student interests in courses is one of the biggest challenges in higher education. Various AI and machine learning (ML) approaches have been used to study student behaviour, and existing AI and ML algorithms identify features in fields such as behavioural analysis, economic analysis, image processing, and personalized medicine. However, there are major concerns about the interpretability and understandability of the decisions a model makes, because most AI algorithms are black-box models. Explainable AI (XAI) aims to break open the black-box nature of an algorithm. In this study, XAI is used to identify engineering students' interests, and a belief rule base (BRB) and SP-LIME are used to explain which attributes are critical to their studies. We also used principal component analysis (PCA) for feature selection to identify the student cohort. Clustering the cohort helps to analyse the influential features in terms of engineering discipline selection. The results show that some valuable factors influence students' studies and, ultimately, the future of a nation.
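The cohort-identification pipeline the abstract describes (PCA for feature reduction, then clustering the students) can be sketched as follows. This is only an illustrative sketch: the survey matrix here is random stand-in data, not the paper's dataset, and scikit-learn's KMeans stands in for the clustering step, whose exact algorithm the abstract does not specify.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Hypothetical survey matrix: 100 students x 12 interest attributes
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 12))

# PCA compresses the attributes into a few components before clustering
pca = PCA(n_components=3)
X_reduced = pca.fit_transform(X)

# KMeans groups the reduced cohort; cluster labels can then be inspected
# against the influential features SP-LIME highlights
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X_reduced)
print(X_reduced.shape, len(set(labels)))
```

The choice of 3 components and 4 clusters is arbitrary here; in practice they would be tuned (e.g. by explained variance for PCA and a silhouette score for the cluster count).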
Pages: 4657-4672
Page count: 16
Related articles
50 in total
  • [21] A multiple case study to understand how students experience science and engineering practices
    Schaben, Chris
    Andersson, Justin
    Cutucache, Christine
    FRONTIERS IN EDUCATION, 2022, 7
  • [22] Segmenting female students' perceptions about Fintech using Explainable AI
    Adam, Christos
    FRONTIERS IN ARTIFICIAL INTELLIGENCE, 2024, 7
  • [23] Toward Explainable Users: Using NLP to Enable AI to Understand Users' Perceptions of Cyber Attacks
    Abri, Faranak
    Gutierrez, Luis Felipe
    Kulkarni, Chaitra T.
    Namin, Akbar Siami
    Jones, Keith S.
    2021 IEEE 45TH ANNUAL COMPUTERS, SOFTWARE, AND APPLICATIONS CONFERENCE (COMPSAC 2021), 2021, : 1703 - 1710
  • [24] From "Explainable AI" to "Graspable AI"
    Ghajargar, Maliheh
    Bardzell, Jeffrey
    Renner, Alison Smith
    Krogh, Peter Gall
    Hook, Kristina
    Cuartielles, David
    Boer, Laurens
    Wiberg, Mikael
    PROCEEDINGS OF THE FIFTEENTH INTERNATIONAL CONFERENCE ON TANGIBLE, EMBEDDED, AND EMBODIED INTERACTION, TEI 2021, 2021,
  • [25] Explainable AI (ex-AI)
    Holzinger, Andreas
    Informatik-Spektrum, 2018, 41 (02) : 138 - 143
  • [26] Introduction to Explainable AI
    Liao, Q. Vera
    Singh, Moninder
    Zhang, Yunfeng
    Bellamy, Rachel K. E.
    EXTENDED ABSTRACTS OF THE 2021 CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS (CHI'21), 2021,
  • [27] Explainable AI for RAMS
    Zaman, Navid
    Apostolou, Evan
    Li, Yan
    Oister, Ken
    2022 68TH ANNUAL RELIABILITY AND MAINTAINABILITY SYMPOSIUM (RAMS 2022), 2022,
  • [28] Introduction to Explainable AI
    Liao, Q. Vera
    Singh, Moninder
    Zhang, Yunfeng
    Bellamy, Rachel K. E.
    CHI'20: EXTENDED ABSTRACTS OF THE 2020 CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS, 2020,
  • [29] Chess and explainable AI
    Bjornsson, Yngvi
    ICGA JOURNAL, 2024, 46 (02) : 67 - 75
  • [30] Explaining explainable AI
    Hind, Michael
    XRDS: Crossroads, 2019, 25 (03): : 16 - 19