Explainable artificial intelligence in geoscience: A glimpse into the future of landslide susceptibility modeling

Cited: 49
Authors
Dahal, Ashok [1 ]
Lombardo, Luigi [1 ]
Affiliations
[1] Univ Twente, Fac Geoinformat Sci & Earth Observ ITC, POB 217, NL-7500 AE Enschede, Netherlands
Keywords
Landslide modeling; Explainable deep learning; Nepal Earthquake; Web-GIS; Transparent modeling; LOGISTIC-REGRESSION; NEURAL-NETWORKS; QUANTITATIVE-ANALYSIS; HAZARD; REGION; NORTH; INFORMATION; PERFORMANCE; TECHNOLOGY; VALIDATION
DOI
10.1016/j.cageo.2023.105364
Chinese Library Classification
TP39 [Computer applications]
Subject Classification Codes
081203; 0835
Abstract
For decades, the distinction between statistical models and machine learning models has been clear. The former are optimized to produce interpretable results, whereas the latter seek to maximize the predictive performance of the task at hand. This holds for any scientific field and for any method belonging to the two categories mentioned above. When attempting to predict natural hazards, this difference has led researchers to make a drastic and difficult choice about which aspect to prioritize. In fact, one would always seek the highest performance, because higher performance translates into better decisions for disaster risk reduction. However, scientists also wish to understand the results, as a way to trust the tool they developed. Today, very recent developments in deep learning have brought forward a new generation of interpretable artificial intelligence, in which the predictive power typical of machine learning tools is equipped with a level of explanatory power typical of statistical approaches. In this work, we attempt to demonstrate the capabilities of this new generation of explainable artificial intelligence (XAI). To do so, we take the landslide susceptibility context as a reference. Specifically, we build an XAI model trained on landslides that occurred in response to the Gorkha earthquake (April 25, 2015), providing an educational overview of the model design and its querying opportunities. The results show high performance, with an AUC score of 0.89, while the interpretability extends down to the probabilistic result assigned to each mapping unit.
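The abstract summarizes model skill with an AUC score over per-unit susceptibility probabilities. As a minimal sketch of that evaluation step only — using a plain logistic regression on synthetic covariates as a stand-in, not the authors' deep network or the Gorkha landslide inventory — the scoring can be expressed with scikit-learn:

```python
# Hedged sketch: binary susceptibility labels per mapping unit,
# a probabilistic classifier, and ROC AUC as the summary score.
# All data and the model here are illustrative placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "mapping units": two covariates (stand-ins for, e.g.,
# slope and shaking intensity) and a landslide / no-landslide label.
X = rng.normal(size=(1000, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)

# Probabilistic susceptibility assigned to each held-out mapping unit,
# summarized into a single AUC value.
p = clf.predict_proba(X_te)[:, 1]
auc = roc_auc_score(y_te, p)
print(f"AUC = {auc:.2f}")
```

The same scoring applies unchanged whatever model produces the per-unit probabilities; only `clf` would be swapped for the trained network.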
Pages: 11