Explainable artificial intelligence enhances the ecological interpretability of black-box species distribution models

Cited: 79
Authors
Ryo, Masahiro [1 ,2 ,3 ]
Angelov, Boyan [4 ]
Mammola, Stefano [5 ,6 ]
Kass, Jamie M. [7 ]
Benito, Blas M. [8 ]
Hartig, Florian [9 ]
Affiliations
[1] Free Univ Berlin, Inst Biol, Berlin, Germany
[2] Berlin Brandenburg Inst Adv Biodivers Res BBIB, Berlin, Germany
[3] Leibniz Ctr Agr Landscape Res ZALF, Muncheberg, Germany
[4] Assoc Comp Machinery ACM, New York, NY USA
[5] Natl Res Council CNR, Mol Ecol Grp MEG, Water Res Inst IRSA, Verbania, Italy
[6] Univ Helsinki, Lab Integrat Biodivers Res LIBRe, Finnish Museum Nat Hist LUOMUS, Helsinki, Finland
[7] Okinawa Inst Sci & Technol Grad Univ, Biodivers & Biocomplex Unit, Okinawa, Japan
[8] Univ Alicante, Inst Environm Studies Ramon Margalef, Dept Ecol & Multidisciplinary, Alicante, Spain
[9] Univ Regensburg, Fac Biol & Preclin Med, Theoret Ecol, Regensburg, Germany
Funding
EU Horizon 2020; Japan Society for the Promotion of Science (JSPS); European Research Council (ERC)
Keywords
ecological modeling; explainable artificial intelligence; habitat suitability modeling; interpretable machine learning; species distribution model; xAI;
DOI
10.1111/ecog.05360
Chinese Library Classification
X176 [Biodiversity conservation]
Discipline code
090705
Abstract
Species distribution models (SDMs) are widely used in ecology, biogeography and conservation biology to estimate relationships between environmental variables and species occurrence data and make predictions of how their distributions vary in space and time. During the past two decades, the field has increasingly made use of machine learning approaches for constructing and validating SDMs. Model accuracy has steadily increased as a result, but the interpretability of the fitted models, for example the relative importance of predictor variables or their causal effects on focal species, has not always kept pace. Here we draw attention to an emerging subdiscipline of artificial intelligence, explainable AI (xAI), as a toolbox for better interpreting SDMs. xAI aims at deciphering the behavior of complex statistical or machine learning models (e.g. neural networks, random forests, boosted regression trees), and can produce more transparent and understandable SDM predictions. We describe the rationale behind xAI and provide a list of tools that can be used to help ecological modelers better understand complex model behavior at different scales. As an example, we perform a reproducible SDM analysis in R on the African elephant and showcase some xAI tools such as local interpretable model-agnostic explanation (LIME) to help interpret local-scale behavior of the model. We conclude with what we see as the benefits and caveats of these techniques and advocate for their use to improve the interpretability of machine learning SDMs.
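The local-surrogate idea behind LIME mentioned in the abstract can be sketched briefly. This is a toy illustration in Python, not the paper's reproducible R workflow: the black-box "SDM" below is a made-up logistic function of two hypothetical predictors (temperature and aridity), and all names are assumptions for illustration. LIME perturbs the inputs around one site, weights the perturbations by proximity, and fits a weighted linear model whose coefficients approximate the model's local behavior at that site.

```python
import numpy as np

rng = np.random.default_rng(0)

def black_box(X):
    # Toy "SDM": suitability rises with temperature (col 0), falls with aridity (col 1).
    return 1.0 / (1.0 + np.exp(-(2.0 * X[:, 0] - 1.5 * X[:, 1])))

x_star = np.array([0.5, 0.2])                         # the site we want to explain
Z = x_star + rng.normal(scale=0.3, size=(500, 2))     # perturbed neighbouring sites
y = black_box(Z)                                      # query the black box

# Proximity weights: perturbations closer to x_star matter more (Gaussian kernel).
w = np.exp(-np.sum((Z - x_star) ** 2, axis=1) / (2 * 0.3 ** 2))

# Weighted least squares: fit an interpretable linear surrogate locally.
A = np.column_stack([np.ones(len(Z)), Z]) * np.sqrt(w)[:, None]
coef, *_ = np.linalg.lstsq(A, y * np.sqrt(w), rcond=None)
intercept, effect_temp, effect_arid = coef
print(f"local effect of temperature: {effect_temp:+.3f}")
print(f"local effect of aridity:     {effect_arid:+.3f}")
```

The surrogate's coefficients recover the local direction of each predictor's effect (positive for temperature, negative for aridity here), which is what xAI tools such as the `lime` R and Python packages report per prediction.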
Pages: 199-205 (7 pages)