Towards explainable artificial intelligence through expert-augmented supervised feature selection

Cited by: 4
Authors
Rabiee, Meysam [1 ,6 ]
Mirhashemi, Mohsen [2 ]
Pangburn, Michael S. [3 ]
Piri, Saeed [3 ]
Delen, Dursun [4 ,5 ]
Affiliations
[1] Univ Colorado Denver, Business Sch, Denver, CO 80202 USA
[2] Univ Tehran, Fac New Sci & Technol, Tehran, Iran
[3] Univ Oregon, Lundquist Coll Business, Eugene, OR 97403 USA
[4] Oklahoma State Univ, Ctr Hlth Syst Innovat, Spears Sch Business, Stillwater, OK 74078 USA
[5] Istinye Univ, Fac Engn & Nat Sci, Dept Ind Engn, TR-34396 Istanbul, Turkiye
[6] Univ Colorado Denver, CU Denver Business Sch, Business Analyt, 1475 Lawrence St 4004, Denver, CO 80202 USA
Keywords
Explainable Artificial Intelligence (XAI); Supervised feature selection; Expert-augmented framework; Genetic algorithm; Machine learning; Feature categorization; GENETIC ALGORITHM; SYSTEMS
DOI
10.1016/j.dss.2024.114214
Chinese Library Classification
TP18 [Theory of Artificial Intelligence]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
This paper presents a comprehensive framework for expert-augmented supervised feature selection, addressing pre-processing, in-processing, and post-processing aspects of Explainable Artificial Intelligence (XAI). As part of pre-processing XAI, we introduce the Probabilistic Solution Generator through Information Fusion (PSGIF) algorithm, leveraging ensemble techniques to enhance the exploration and exploitation capabilities of a Genetic Algorithm (GA). Balancing explainability and prediction accuracy, we formulate two multi-objective optimization models that empower expert(s) to specify a maximum acceptable sacrifice percentage. This approach enhances explainability by reducing the number of selected features and prioritizing those considered more relevant from the domain expert's perspective. This contribution aligns with in-processing XAI, incorporating expert opinions into the feature selection process as a multi-objective problem. Traditional feature selection techniques lack the capability to efficiently search the solution space given our explainability-focused objective function. To overcome this, we leverage the GA, a powerful metaheuristic, and optimize its parameters through Bayesian optimization. For post-processing XAI, we present the Posterior Ensemble Algorithm (PEA), which estimates the predictive power of features. PEA enables a nuanced comparison between objective and subjective importance, identifying features as underrated, overrated, or appropriately rated. We evaluate the performance of our proposed GAs on 16 publicly available datasets, focusing on prediction accuracy in a single-objective setting. Moreover, we test our multi-objective model on a classification dataset to demonstrate the applicability and effectiveness of our framework. Overall, this paper provides a holistic and nuanced approach to explainable feature selection, offering decision-makers a comprehensive understanding of feature importance.
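To make the in-processing idea concrete, the following is a minimal, illustrative Python sketch, not the authors' PSGIF/PEA implementation: a plain genetic algorithm searches boolean feature masks, and its fitness rewards cross-validated accuracy, sparsity, and agreement with a hypothetical expert-relevance vector, while rejecting subsets that sacrifice more accuracy than an assumed 2% cap. The dataset (scikit-learn's breast cancer data), the logistic-regression base learner, the random expert scores, the objective weights, and the GA operators are all placeholder assumptions chosen for brevity.

# Illustrative sketch only (not the paper's PSGIF/PEA code): a simple GA wrapper for
# feature selection whose fitness trades accuracy against feature count and a
# hypothetical expert-relevance vector, under a maximum acceptable accuracy sacrifice.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X, y = load_breast_cancer(return_X_y=True)      # stand-in dataset
n_features = X.shape[1]

expert_relevance = rng.random(n_features)       # placeholder for expert scores in [0, 1]
max_sacrifice = 0.02                            # assumed max acceptable accuracy drop (2%)

def accuracy(mask):
    """Cross-validated accuracy of a base learner on the selected feature subset."""
    if mask.sum() == 0:
        return 0.0
    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    return cross_val_score(model, X[:, mask], y, cv=3).mean()

baseline_acc = accuracy(np.ones(n_features, dtype=bool))

def fitness(mask):
    """Accuracy plus bonuses for sparsity and expert relevance, within the sacrifice cap."""
    acc = accuracy(mask)
    if baseline_acc - acc > max_sacrifice:      # expert's maximum-sacrifice constraint
        return -np.inf
    sparsity = 1.0 - mask.mean()                # fewer features -> more explainable
    relevance = expert_relevance[mask].mean()   # favor features the expert values
    return acc + 0.1 * sparsity + 0.1 * relevance   # assumed objective weights

def evolve(pop_size=20, generations=15, mut_rate=0.05):
    pop = rng.random((pop_size, n_features)) < 0.5          # random boolean feature masks
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[::-1][: pop_size // 2]]   # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_features)
            child = np.concatenate([a[:cut], b[cut:]])       # one-point crossover
            child ^= rng.random(n_features) < mut_rate       # bit-flip mutation
            children.append(child)
        pop = np.vstack([parents, children])
    return max(pop, key=fitness)

best = evolve()
print("selected features:", np.flatnonzero(best))
print("subset accuracy:", round(accuracy(best), 4), "baseline:", round(baseline_acc, 4))

In the paper itself, initial populations are seeded via PSGIF, GA hyperparameters are tuned with Bayesian optimization, and PEA post-processes feature importance to compare objective and subjective rankings; none of that is reproduced in this sketch.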
Pages: 14
Related Papers (50 in total)
  • [1] Designing a feature selection method based on explainable artificial intelligence
    Zacharias, Jan
    von Zahn, Moritz
    Chen, Johannes
    Hinz, Oliver
    ELECTRONIC MARKETS, 2022, 32 (04) : 2159 - 2184
  • [2] Leveraging Explainable Artificial Intelligence in Solar Photovoltaic Mappings: Model Explanations and Feature Selection
    Gomes, Eduardo
    Esteves, Augusto
    Morais, Hugo
    Pereira, Lucas
    ENERGIES, 2025, 18 (05)
  • [3] Explainable artificial intelligence and advanced feature selection methods for predicting gas concentration in longwall mining
    Chang, Haoqian
    Wang, Xiangqian
    Cristea, Alexandra I.
    Meng, Xiangrui
    Hu, Zuxiang
    Pan, Ziqi
    INFORMATION FUSION, 2025, 118
  • [4] Surface electromyography based explainable Artificial Intelligence fusion framework for feature selection of hand gesture recognition
    Gehlot, Naveen
    Jena, Ashutosh
    Vijayvargiya, Ankit
    Kumar, Rajesh
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2024, 137
  • [5] Robust Network Intrusion Detection Through Explainable Artificial Intelligence (XAI)
    Barnard, Pieter
    Marchetti, Nicola
    Dasilva, Luiz A.
    IEEE NETWORKING LETTERS, 2022, 4 (03) : 167 - 171
  • [6] Enhancing Explainable Artificial Intelligence: Using Adaptive Feature Weight Genetic Explanation (AFWGE) with Pearson Correlation to Identify Crucial Feature Groups
    Aljalaud, Ebtisam
    Hosny, Manar
    MATHEMATICS, 2024, 12 (23)
  • [7] Towards Semantic Integration for Explainable Artificial Intelligence in the Biomedical Domain
    Pesquita, Catia
    HEALTHINF: PROCEEDINGS OF THE 14TH INTERNATIONAL JOINT CONFERENCE ON BIOMEDICAL ENGINEERING SYSTEMS AND TECHNOLOGIES - VOL. 5: HEALTHINF, 2021, : 747 - 753
  • [8] Feature Selection in Cancer Classification: Utilizing Explainable Artificial Intelligence to Uncover Influential Genes in Machine Learning Models
    Dalmolin, Matheus
    Azevedo, Karolayne S.
    de Souza, Luisa C.
    de Farias, Caroline B.
    Lichtenfels, Martina
    Fernandes, Marcelo A. C.
    AI, 2025, 6 (01)
  • [9] Explainable artificial intelligence for manufacturing cost estimation and machining feature visualization
    Yoo, Soyoung
    Kang, Namwoo
    EXPERT SYSTEMS WITH APPLICATIONS, 2021, 183