Illuminating the black box: An interpretable machine learning based on ensemble trees

Citations: 0
Authors
Lee, Yue-Shi [1 ]
Yen, Show-Jane [1 ]
Jiang, Wendong [2 ]
Chen, Jiyuan [3 ]
Chang, Chih-Yung [2 ]
Affiliations
[1] Ming Chuan Univ, Dept Comp Sci & Informat Engn, Taoyuan City 333, Taiwan
[2] Tamkang Univ, Dept Comp Sci & Informat Engn, New Taipei 25137, Taiwan
[3] Univ Melbourne, Fac Engn & Informat Technol, Parkville, Vic 3052, Australia
Keywords
Interpretable machine learning; Machine learning; Explanation;
DOI
10.1016/j.eswa.2025.126720
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep learning has achieved significant success in the analysis of unstructured data, but its inherent black-box nature imposes serious limitations in security-sensitive domains. Although many existing interpretable machine learning methods partially address this issue, they often suffer from model restrictions, randomness in the explanations, and a lack of global interpretability. To address these challenges, this paper introduces an innovative interpretable ensemble-tree method, EnEXP. The method generates a sample set by applying fixed masking perturbations to an individual sample, constructs multiple decision trees using bagging and boosting, and derives the explanation from the feature-importance outputs of these trees; aggregating the explanations of all samples then yields a global interpretation of the entire dataset. Experimental results demonstrate that EnEXP offers stronger explanatory power than other interpretable methods. In text-processing experiments, a bag-of-words model optimized with EnEXP outperformed a fine-tuned GPT-3 Ada model.
Pages: 19
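
The abstract describes the EnEXP procedure only in prose. Below is a minimal, hedged sketch of that idea in Python, assuming a generic scikit-learn-style black-box predictor. The helper names (mask_sample, explain_sample), the masking probability, and the simple averaging of bagging and boosting importances are illustrative assumptions, not the authors' implementation.

import numpy as np
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor

def mask_sample(x, n_masks=200, mask_prob=0.3, rng=None):
    # Create perturbed copies of x by zeroing out a fixed fraction of features.
    rng = np.random.default_rng(0) if rng is None else rng
    keep = rng.random((n_masks, x.shape[0])) > mask_prob  # True = keep this feature
    return x * keep

def explain_sample(x, black_box_predict, n_masks=200):
    # Fit surrogate tree ensembles on the perturbed neighbourhood of x and
    # read their feature importances as the local explanation.
    X_pert = mask_sample(x, n_masks=n_masks)
    y_pert = black_box_predict(X_pert)          # query the black-box model
    bag = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_pert, y_pert)
    boost = GradientBoostingRegressor(random_state=0).fit(X_pert, y_pert)
    # Combine bagging and boosting importances (simple average as a placeholder).
    return (bag.feature_importances_ + boost.feature_importances_) / 2.0

# Global explanation: average the per-sample importance vectors over the dataset,
# mirroring the aggregation step described in the abstract.
# global_importance = np.mean([explain_sample(x, f) for x in X], axis=0)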