PCAIME: Principal Component Analysis-Enhanced Approximate Inverse Model Explanations Through Dimensional Decomposition and Expansion

Citations: 0
Authors
Nakanishi, Takafumi [1 ]
Affiliations
[1] Musashino Univ, Dept Data Sci, Tokyo 1358181, Japan
Source
IEEE ACCESS | 2024, Vol. 12
Keywords
Correlation; Analytical models; Principal component analysis; Feature extraction; Artificial intelligence; Estimation; Dimensionality reduction; Explainable AI; Approximation methods; Approximate inverse model explanation; explainable artificial intelligence; feature correlation; feature importance; model explanation; principal component analysis; principal component analysis-enhanced approximate inverse model explanation;
DOI
10.1109/ACCESS.2024.3450299
CLC Number
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Complex "black-box" artificial intelligence (AI) models are interpreted using interpretable machine learning and explainable AI (XAI); assessing the importance of global and local features is therefore crucial. The previously proposed approximate inverse model explanation (AIME) offers unified explanations of global and local feature importance. This study builds on that foundation by assessing feature contributions while also examining multicollinearity and correlation among features in XAI-derived explanations. Because advanced AI and machine learning models inherently handle multicollinearity and correlation among features, XAI methods must explain these dynamics clearly for the models' estimation results and behaviors to be fully understood. This study proposes a new technique, principal component analysis-enhanced approximate inverse model explanation (PCAIME), which extends AIME with dimensionality-decomposition and -expansion capabilities based on PCA. PCAIME derives contributing features, reveals the multicollinearity and correlation between features and their contributions through a two-dimensional heat map of principal components, and identifies the features selected after dimensionality reduction. Experiments on the wine-quality and automobile miles-per-gallon datasets compared the effectiveness of local interpretable model-agnostic explanations (LIME), AIME, and PCAIME, particularly for analyzing local feature importance. PCAIME outperformed its counterparts by effectively revealing feature correlations and providing a more comprehensive view of feature interactions. Notably, PCAIME estimated both global and local feature importance and offered novel insights by simultaneously visualizing feature correlations through heat maps. PCAIME can thus improve the understanding of complex algorithms and datasets, promoting transparent AI and machine learning in fields such as healthcare, finance, and public policy.
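The dimensional decomposition-and-expansion idea described in the abstract can be illustrated with a minimal sketch: project the data onto principal components, estimate per-component importance with a simple linear surrogate (a stand-in used here for illustration, not AIME's actual inverse-model formulation), and expand back through the PCA loadings to obtain the feature-by-component contribution matrix that the heat map would visualize. All variable names, the number of retained components, and the surrogate choice are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy data with deliberately correlated features (the multicollinearity PCAIME targets)
n, d = 200, 4
X = rng.normal(size=(n, d))
X[:, 1] = X[:, 0] + 0.1 * rng.normal(size=n)          # feature 1 nearly duplicates feature 0
y = X @ np.array([1.0, 1.0, 0.5, 0.0]) + 0.1 * rng.normal(size=n)

# 1) Dimensional decomposition: PCA via SVD on centered data
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 3                                                  # retained components (assumed choice)
Z = Xc @ Vt[:k].T                                      # scores in the reduced PC space

# 2) Importance in PC space: least-squares surrogate (stand-in for the inverse model)
w_pc, *_ = np.linalg.lstsq(Z, y, rcond=None)

# 3) Dimensional expansion: map component importance back through the loadings,
#    giving a (feature x component) contribution matrix -- the heat-map content
contrib = Vt[:k].T * w_pc                              # shape (d, k)
global_importance = np.abs(contrib).sum(axis=1)        # aggregate per-feature importance
print(contrib.shape)
```

Because correlated features load on the same principal components, their shared contribution shows up in the same column of `contrib`, which is how a heat map of this matrix exposes feature correlation alongside importance.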
Pages: 121093 - 121113
Page Count: 21