Applying Genetic Programming to Improve Interpretability in Machine Learning Models

Cited by: 0
Authors
Ferreira, Leonardo Augusto [1 ]
Guimaraes, Frederico Gadelha [1 ]
Silva, Rodrigo [2 ]
Affiliations
[1] Univ Fed Minas Gerais, UFMG, Dept Elect Engn, Machine Intelligence & Data Sci MINDS Lab, BR-31270000 Belo Horizonte, MG, Brazil
[2] Univ Fed Ouro Preto, UFOP, Dept Comp Sci, BR-35400000 Ouro Preto, MG, Brazil
Source
2020 IEEE CONGRESS ON EVOLUTIONARY COMPUTATION (CEC) | 2020
Keywords
Interpretability; Machine Learning; Genetic Programming; Explainability;
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Explainable Artificial Intelligence (or xAI) has become an important research topic in the fields of Machine Learning and Deep Learning. In this paper, we propose a Genetic Programming (GP) based approach, named Genetic Programming Explainer (GPX), to the problem of explaining decisions computed by AI systems. The method generates a noise set located in the neighborhood of the point of interest, whose prediction should be explained, and fits a local explanation model for the analyzed sample. The tree structure generated by GPX provides a comprehensible analytical, possibly non-linear, expression which reflects the local behavior of the complex model. We considered three machine learning techniques commonly regarded as complex black-box models: Random Forest, Deep Neural Network and Support Vector Machine, applied to twenty data sets covering regression and classification problems. Our results indicate that GPX is able to produce a more accurate understanding of complex models than the state of the art. The results validate the proposed approach as a novel way to deploy GP to improve interpretability.
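The abstract describes the core mechanism: sample a noise set around the point of interest, label it with the black-box model, and evolve a GP expression that fits this local behavior. The sketch below illustrates that workflow under assumptions not taken from the paper: it uses scikit-learn's RandomForestRegressor as the black box and gplearn's SymbolicRegressor as a stand-in for the authors' GPX implementation, with an arbitrary neighborhood noise scale; it is not the authors' code.

    # Minimal sketch of a GP-based local explainer in the spirit of GPX
    # (gplearn used as an assumed substitute for the GPX implementation).
    import numpy as np
    from sklearn.datasets import load_diabetes
    from sklearn.ensemble import RandomForestRegressor
    from gplearn.genetic import SymbolicRegressor

    # Black-box model whose prediction at a point of interest we want to explain.
    X, y = load_diabetes(return_X_y=True)
    black_box = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

    # Noise set sampled in the neighborhood of the point of interest and labeled
    # by the black box (the 0.1 * feature-std scale is an illustrative choice).
    x_star = X[0]
    rng = np.random.default_rng(0)
    neighborhood = x_star + rng.normal(scale=0.1 * X.std(axis=0),
                                       size=(500, X.shape[1]))
    local_targets = black_box.predict(neighborhood)

    # GP symbolic regressor fitted on the neighborhood; the evolved tree gives a
    # readable, possibly non-linear, local expression.
    surrogate = SymbolicRegressor(population_size=500, generations=20,
                                  function_set=('add', 'sub', 'mul', 'div'),
                                  parsimony_coefficient=0.01, random_state=0)
    surrogate.fit(neighborhood, local_targets)

    print(surrogate._program)  # analytical expression for the local behavior
    print(surrogate.predict(x_star.reshape(1, -1))[0],
          black_box.predict(x_star.reshape(1, -1))[0])  # local fidelity check

The parsimony coefficient is the knob that keeps the evolved tree short enough to read; how GPX balances local fidelity against expression size is detailed in the paper itself.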
Pages: 8