Explainable artificial intelligence in geoscience: A glimpse into the future of landslide susceptibility modeling

Cited: 48
Authors
Dahal, Ashok [1 ]
Lombardo, Luigi [1 ]
Affiliations
[1] Univ Twente, Fac Geoinformat Sci & Earth Observ ITC, POB 217, NL-7500 AE Enschede, Netherlands
Keywords
Landslide modeling; Explainable deep learning; Nepal Earthquake; Web-GIS; Transparent modeling; LOGISTIC-REGRESSION; NEURAL-NETWORKS; QUANTITATIVE-ANALYSIS; HAZARD; REGION; NORTH; INFORMATION; PERFORMANCE; TECHNOLOGY; VALIDATION
DOI
10.1016/j.cageo.2023.105364
Chinese Library Classification (CLC)
TP39 [Computer applications]
Subject classification codes
081203; 0835
Abstract
For decades, the distinction between statistical models and machine learning models has been clear. The former are optimized to produce interpretable results, whereas the latter seek to maximize the predictive performance of the task at hand. This holds for any scientific field and for any method belonging to either category. When attempting to predict natural hazards, this difference has led researchers to face a difficult choice about which aspect to prioritize. On the one hand, one would always seek the highest performance, because better predictions translate into better decisions for disaster risk reduction. On the other hand, scientists also wish to understand the results, as a way to build trust in the tools they develop. Today, recent developments in deep learning have brought forward a new generation of interpretable artificial intelligence, in which the predictive power typical of machine learning tools is combined with the explanatory power typical of statistical approaches. In this work, we demonstrate the capabilities of this new generation of explainable artificial intelligence (XAI), taking landslide susceptibility modeling as our reference context. Specifically, we build an XAI model trained on landslides that occurred in response to the Gorkha earthquake (April 25, 2015), providing an educational overview of the model design and its querying opportunities. The results show high performance, with an AUC score of 0.89, while the interpretability extends down to the probabilistic result assigned to each mapping unit.
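The abstract describes an explainable deep-learning model whose per-mapping-unit susceptibility probability can be decomposed into covariate-level contributions and evaluated with an AUC score. The sketch below is a minimal, hypothetical illustration of that general idea, not the authors' implementation: an additive neural network in Keras in which each covariate feeds its own small sub-network, the scalar outputs are summed into a logit, and the per-covariate contributions can be queried for any mapping unit. Covariate names and data are synthetic placeholders.

# Illustrative sketch only (not the paper's architecture or data): an additive
# "explainable" neural network for landslide susceptibility. Each covariate
# gets its own sub-network whose scalar output contributes additively to the
# logit; the sigmoid of the sum is the susceptibility probability, and the
# individual contributions can be read off per mapping unit.
import numpy as np
import tensorflow as tf
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
covariates = ["slope", "pga", "relief", "distance_to_fault"]  # placeholder names

# Synthetic stand-in for mapping-unit data: n units x 4 covariates,
# with a binary landslide presence/absence label.
n = 5000
X = rng.normal(size=(n, len(covariates))).astype("float32")
logit_true = 1.5 * X[:, 0] + 1.0 * X[:, 1] - 0.8 * X[:, 3]
y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit_true))).astype("float32")

# One sub-network per covariate -> scalar contribution to the logit.
inputs, contributions = [], []
for name in covariates:
    x_in = tf.keras.Input(shape=(1,), name=name)
    h = tf.keras.layers.Dense(16, activation="relu")(x_in)
    h = tf.keras.layers.Dense(16, activation="relu")(h)
    contributions.append(tf.keras.layers.Dense(1, name=f"f_{name}")(h))
    inputs.append(x_in)

logit = tf.keras.layers.Add(name="logit")(contributions)
prob = tf.keras.layers.Activation("sigmoid", name="susceptibility")(logit)
model = tf.keras.Model(inputs, prob)
model.compile(optimizer="adam", loss="binary_crossentropy")

X_cols = [X[:, i : i + 1] for i in range(len(covariates))]
model.fit(X_cols, y, epochs=10, batch_size=256, verbose=0)

# Predictive skill summarized as AUC, the metric reported in the abstract.
auc = roc_auc_score(y, model.predict(X_cols, verbose=0).ravel())
print(f"AUC: {auc:.2f}")

# "Querying" the model: per-covariate contributions for each mapping unit.
explainer = tf.keras.Model(inputs, contributions)
unit_contribs = np.hstack(explainer.predict(X_cols, verbose=0))
print(dict(zip(covariates, unit_contribs[0].round(3))))

The design point of such an additive structure is that the same quantities that produce the probability also explain it: summing the printed contributions and applying a sigmoid reproduces the susceptibility value for that unit.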
Pages: 11