A New Approach to Spatial Landslide Susceptibility Prediction in Karst Mining Areas Based on Explainable Artificial Intelligence

Cited by: 21
Authors
Fang, Haoran [1 ,2 ,3 ]
Shao, Yun [1 ,2 ,3 ]
Xie, Chou [1 ,2 ,3 ]
Tian, Bangsen [1 ]
Shen, Chaoyong [4 ]
Zhu, Yu [1 ]
Guo, Yihong [1 ]
Yang, Ying [1 ,2 ]
Chen, Guanwen [4 ]
Zhang, Ming [1 ,2 ]
Affiliations
[1] Univ Chinese Acad Sci, Aerosp Informat Res Inst, Beijing 100094, Peoples R China
[2] Univ Chinese Acad Sci, Beijing 100049, Peoples R China
[3] Deqing Acad Satellite Applicat, Lab Target Microwave Properties, Huzhou 313200, Peoples R China
[4] Third Surveying & Mapping Inst Guizhou Prov, Guiyang 550004, Peoples R China
Keywords
landslide susceptibility map; explainable AI; GIS; karst landform; coal mining; random forest; algorithms; regression; tree
DOI: 10.3390/su15043094
Chinese Library Classification (CLC): X [Environmental science; safety science]
Discipline codes: 08; 0830
Abstract
Landslides are a common and costly geological hazard whose regular occurrence leads to significant damage and losses. To manage land use effectively and reduce landslide risk, susceptibility assessments are crucial. To date, many machine-learning methods have been applied to landslide susceptibility mapping (LSM). However, because susceptibility mapping is a form of risk prediction, applying models without good interpretability to real-world decisions is itself risky. This study assessed landslide susceptibility in the Nayong region of Guizhou, China, and conducted a comprehensive evaluation of landslide susceptibility maps using explainable artificial intelligence. It combined remote sensing data, field surveys, geographic information system (GIS) techniques, and interpretable machine learning to analyze landslide susceptibility and to compare the results with those of conventional models. As an interpretable machine-learning method, the generalized additive model with structured interactions (GAMI-net) reveals how an LSM model makes its decisions. The results showed that the GAMI-net model was valid, with an area under the curve (AUC) of 0.91 on the receiver operating characteristic (ROC) curve, outperforming the random forest (0.85) and SVM (0.81) models. Areas affected by coal mining, rock desertification, and rainfall greater than 1300 mm were more susceptible to landslides in the study area. Pairwise interaction factors, such as rainfall and mining, lithology and rainfall, and rainfall and elevation, also increased landslide susceptibility. These results show that interpretable models can accurately predict landslide susceptibility while revealing the causes of landslide occurrence. The GAMI-net-based model exhibited good predictive capability and substantially improved model interpretability to inform landslide management and decision making, suggesting great potential for application in LSM.
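The model comparison described in the abstract (ranking classifiers by AUC on the ROC curve) can be sketched with scikit-learn as follows. This is a minimal illustration only: the synthetic dataset stands in for the paper's conditioning factors (rainfall, mining, lithology, elevation, etc.), the model settings are untuned defaults, and the GAMI-net model itself is omitted; only random forest and SVM baselines are shown.

```python
# Sketch: comparing landslide-susceptibility classifiers by ROC AUC.
# Synthetic data is an illustrative stand-in for real conditioning factors.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Binary labels: 1 = landslide cell, 0 = non-landslide cell (synthetic).
X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "svm": SVC(probability=True, random_state=0),  # probability=True enables predict_proba
}

aucs = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    scores = model.predict_proba(X_te)[:, 1]  # susceptibility score in [0, 1]
    aucs[name] = roc_auc_score(y_te, scores)
    print(f"{name}: AUC = {aucs[name]:.3f}")
```

A higher AUC indicates a better ranking of landslide versus non-landslide cells, which is how the paper compares GAMI-net (0.91) against random forest (0.85) and SVM (0.81).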
Pages: 22
Related Papers (50 records)
  • [41] Mixing Approach for Text Data Augmentation Based on an Ensemble of Explainable Artificial Intelligence Methods
    Yu, Jinyi
    Choi, Jinhae
    Lee, Younghoon
    NEURAL PROCESSING LETTERS, 2023, 55 (02) : 1741 - 1757
  • [42] Explainable artificial intelligence for intrusion detection in IoT networks: A deep learning based approach
    Sharma, Bhawana
    Sharma, Lokesh
    Lal, Chhagan
    Roy, Satyabrata
    EXPERT SYSTEMS WITH APPLICATIONS, 2024, 238
  • [43] An explainable Artificial Intelligence approach for delineating sex-based profiles in severe asthma
    Catalisano, Alessia
    Marzi, Chiara
    Allegrini, Chiara
    Bentivegna, Elisa
    Bracciali, Alberto
    Insalata, Greta
    Marinato, Martina Maria
    Diciotti, Stefano
    Baccini, Michela
    Camiciottoli, Gianna
    EUROPEAN RESPIRATORY JOURNAL, 2024, 64
  • [45] An explainable artificial-intelligence-based approach to investigating factors that influence the citation of papers
    Ha, Taehyun
    TECHNOLOGICAL FORECASTING AND SOCIAL CHANGE, 2022, 184
  • [46] Explainable artificial intelligence for breast cancer: A visual case-based reasoning approach
    Lamy, Jean-Baptiste
    Sekar, Boomadevi
    Guezennec, Gilles
    Bouaud, Jacques
    Seroussi, Brigitte
    ARTIFICIAL INTELLIGENCE IN MEDICINE, 2019, 94 : 42 - 53
  • [47] INVESTIGATING EXPLAINABLE ARTIFICIAL INTELLIGENCE FOR MRI-BASED CLASSIFICATION OF DEMENTIA: A NEW STABILITY CRITERION FOR EXPLAINABLE METHODS
    Salih, Ahmed
    Galazzo, Ilaria Boscolo
    Cruciani, Federica
    Brusini, Lorenza
    Radeva, Petia
    2022 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP, 2022, : 4003 - 4007
  • [48] Improving Landslide Susceptibility Prediction in Uttarakhand through Hyper-Tuned Artificial Intelligence and Global Sensitivity Analysis
    Rihan, Mohd
    Talukdar, Swapan
    Naikoo, Mohd Waseem
    Ahmed, Rayees
    Shahfahad
    Rahman, Atiqur
    EARTH SYSTEMS AND ENVIRONMENT, 2024
  • [49] A multilayer multimodal detection and prediction model based on explainable artificial intelligence for Alzheimer's disease
    El-Sappagh, Shaker
    Alonso, Jose M.
    Islam, S. M. Riazul
    Sultan, Ahmad M.
    Kwak, Kyung Sup
    SCIENTIFIC REPORTS, 2021, 11
  • [50] Explainable Artificial Intelligence Approach for the Early Prediction of Ventilator Support and Mortality in COVID-19 Patients
    Aslam, Nida
    COMPUTATION, 2022, 10 (03)