A New Approach to Spatial Landslide Susceptibility Prediction in Karst Mining Areas Based on Explainable Artificial Intelligence

Cited: 21
Authors
Fang, Haoran [1 ,2 ,3 ]
Shao, Yun [1 ,2 ,3 ]
Xie, Chou [1 ,2 ,3 ]
Tian, Bangsen [1 ]
Shen, Chaoyong [4 ]
Zhu, Yu [1 ]
Guo, Yihong [1 ]
Yang, Ying [1 ,2 ]
Chen, Guanwen [4 ]
Zhang, Ming [1 ,2 ]
Affiliations
[1] Univ Chinese Acad Sci, Aerosp Informat Res Inst, Beijing 100094, Peoples R China
[2] Univ Chinese Acad Sci, Beijing 100049, Peoples R China
[3] Deqing Acad Satellite Applicat, Lab Target Microwave Properties, Huzhou 313200, Peoples R China
[4] Third Surveying & Mapping Inst Guizhou Prov, Guiyang 550004, Peoples R China
Keywords
landslide susceptibility map; explainable AI; GIS; karst landform; coal mining; RANDOM FOREST; ALGORITHMS; REGRESSION; TREE
DOI
10.3390/su15043094
Chinese Library Classification (CLC)
X [Environmental science; safety science]
Subject Classification Codes
08; 0830
Abstract
Landslides are a common and costly geological hazard, with regular occurrences leading to significant damage and losses. To effectively manage land use and reduce the risk of landslides, it is crucial to conduct susceptibility assessments. To date, many machine-learning methods have been applied to landslide susceptibility mapping (LSM). However, because landslide susceptibility mapping is a form of risk prediction, applying these methods in practice without good interpretability would itself be risky. This study aimed to assess landslide susceptibility in the Nayong region of Guizhou, China, and to conduct a comprehensive evaluation of landslide susceptibility maps using explainable artificial intelligence. The study combines remote sensing data, field surveys, geographic information system (GIS) techniques, and interpretable machine learning to analyze landslide susceptibility and to compare the results with conventional models. As an interpretable machine-learning method, the generalized additive model with structured interactions (GAMI-Net) can be used to understand how LSM models make decisions. The results showed that the GAMI-Net model was valid, with an area under the curve (AUC) of 0.91 on the receiver operating characteristic (ROC) curve, better than the values of 0.85 and 0.81 obtained by the random forest and SVM models, respectively. Areas affected by coal mining, rocky desertification, and rainfall greater than 1300 mm were more susceptible to landslides in the study area. In addition, pairwise interaction factors, such as rainfall and mining, lithology and rainfall, and rainfall and elevation, also increased landslide susceptibility. The results showed that interpretable models can accurately predict landslide susceptibility and reveal the causes of landslide occurrence. The GAMI-Net-based model exhibited good predictive capability and substantially increased model interpretability to inform landslide management and decision making, suggesting its great potential for application in LSM.
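To make the reported evaluation concrete, the following Python sketch (using scikit-learn, with synthetic data and assumed feature meanings) shows how the ROC-AUC comparison for the two baseline models and a rough pairwise-interaction check could be set up. It is not the authors' GAMI-Net pipeline or data; in GAMI-Net the pairwise interaction terms are modeled explicitly, whereas here a post-hoc partial-dependence plot stands in for that idea.

# Minimal sketch: ROC-AUC comparison of baseline classifiers for landslide
# susceptibility, plus a pairwise partial-dependence check as a rough analogue
# of the interaction effects GAMI-Net exposes. Synthetic data; the feature
# meanings in the comments are assumptions, not the study's dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import partial_dependence
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# X: conditioning factors per mapping unit (e.g., rainfall, elevation,
# lithology class, distance to mining); y: 1 = landslide, 0 = non-landslide.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 6))
y = (X[:, 0] + 0.5 * X[:, 1] * X[:, 2] + rng.normal(scale=0.5, size=1000) > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

models = {
    "random forest": RandomForestClassifier(n_estimators=300, random_state=0),
    "SVM": SVC(probability=True, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: ROC AUC = {auc:.2f}")

# Inspect one pairwise effect (features 1 x 2), analogous to the
# rainfall-elevation interaction highlighted in the abstract.
pd_pair = partial_dependence(
    models["random forest"], X_tr, features=[(1, 2)], kind="average", grid_resolution=20
)
print("partial-dependence grid shape:", pd_pair["average"].shape)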
Pages: 22