Multi3Net: Segmenting Flooded Buildings via Fusion of Multiresolution, Multisensor, and Multitemporal Satellite Imagery

Cited by: 0
Authors
Rudner, Tim G. J. [1]
Russwurm, Marc [2]
Fil, Jakub [3]
Pelich, Ramona [4]
Bischke, Benjamin [5,6]
Kopackova, Veronika [7]
Bilinski, Piotr [1,8]
Affiliations
[1] Univ Oxford, Oxford, England
[2] Tech Univ Munich, Munich, Germany
[3] Univ Kent, Canterbury, Kent, England
[4] Luxembourg Inst Sci & Technol, Luxembourg, Luxembourg
[5] Tech Univ Kaiserslautern, Kaiserslautern, Germany
[6] DFKI, Kaiserslautern, Germany
[7] Czech Geol Survey, Brno, Czech Republic
[8] Univ Warsaw, Warsaw, Poland
Keywords
(none listed)
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
We propose a novel approach for rapid segmentation of flooded buildings by fusing multiresolution, multisensor, and multitemporal satellite imagery in a convolutional neural network. Our model significantly expedites the generation of satellite imagery-based flood maps, crucial for first responders and local authorities in the early stages of flood events. By incorporating multitemporal satellite imagery, our model allows for rapid and accurate post-disaster damage assessment and can be used by governments to better coordinate medium- and long-term financial assistance programs for affected areas. The network consists of multiple streams of encoder-decoder architectures that extract spatiotemporal information from medium-resolution images and spatial information from high-resolution images before fusing the resulting representations into a single medium-resolution segmentation map of flooded buildings. We compare our model to state-of-the-art methods for building footprint segmentation as well as to alternative fusion approaches for the segmentation of flooded buildings and find that our model performs best on both tasks. We also demonstrate that our model produces highly accurate segmentation maps of flooded buildings using only publicly available medium-resolution data instead of significantly more detailed but sparsely available very high-resolution data. We release the first open-source dataset of fully preprocessed and labeled multiresolution, multispectral, and multitemporal satellite images of disaster sites along with our source code.
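The core idea in the abstract, combining feature streams computed at different image resolutions into one segmentation map, can be illustrated with a minimal NumPy sketch. This is not the paper's implementation; the stream features, fusion weights, and nearest-neighbour upsampling below are illustrative assumptions standing in for the learned encoder-decoder outputs and a trained 1x1 fusion convolution.

```python
import numpy as np

def upsample_nearest(feat, factor):
    """Nearest-neighbour upsampling of a (C, H, W) feature map."""
    return feat.repeat(factor, axis=1).repeat(factor, axis=2)

def fuse_streams(med_feat, high_feat, weights):
    """Late fusion sketch: bring the medium-resolution stream's features onto
    the finer grid, concatenate along channels, and apply a 1x1 convolution
    (a per-pixel linear map) to produce per-class scores."""
    factor = high_feat.shape[1] // med_feat.shape[1]
    med_up = upsample_nearest(med_feat, factor)
    fused = np.concatenate([med_up, high_feat], axis=0)   # (C1+C2, H, W)
    # A 1x1 convolution is a matrix multiply over the channel dimension.
    scores = np.einsum('kc,chw->khw', weights, fused)     # (K, H, W)
    return scores.argmax(axis=0)                          # per-pixel class map

rng = np.random.default_rng(0)
med = rng.standard_normal((4, 8, 8))     # e.g. Sentinel-derived stream features
high = rng.standard_normal((2, 32, 32))  # e.g. high-resolution stream features
w = rng.standard_normal((3, 6))          # 3 classes, 4 + 2 fused channels
seg = fuse_streams(med, high, w)
print(seg.shape)  # (32, 32)
```

In the actual model the fused representation would pass through further learned layers rather than a single linear map, but the resolution-matching and channel-wise fusion shown here is the structural pattern the abstract describes.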
Pages: 702-709
Page count: 8