AUTL: An Attention U-Net Transfer Learning Inversion Framework for Magnetotelluric Data

Cited by: 1
Authors
Gao, Ci [1]
Li, Yabin [1]
Wang, Xueqiu [1]
Affiliations
[1] Jilin Univ, Coll Geoexplorat Sci & Technol, Changchun 130026, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Attention U-Net; geophysics; inversion; magnetotelluric (MT); transfer learning (TL); Occam's inversion
DOI
10.1109/LGRS.2024.3471623
Chinese Library Classification (CLC)
P3 [Geophysics]; P59 [Geochemistry];
Discipline classification codes
0708; 070902
Abstract
Given the limited number of labeled magnetotelluric (MT) field data samples, current neural network (NN) inversions for MT rely primarily on synthetic data, which may not fully capture the complexity of true underground resistivity structures. This letter introduces a novel inversion scheme that combines an attention U-Net with transfer learning (AUTL) to bridge this gap. The proposed method improves inversion accuracy by integrating field data into the training process through transfer learning (TL), using 3-D models derived from real field measurements as the target dataset. This enhances the authenticity and reliability of the inversion results. Additionally, the incorporation of attention gates (AGs) significantly improves feature extraction by focusing on relevant features. We validate the effectiveness of the AUTL approach using both synthetic and measured data, demonstrating its superior performance in reconstructing underground resistivity structures with high accuracy and noise resistance. This method offers a promising solution for training inversion networks with limited field MT data and lays the groundwork for expanding datasets with more diverse 3-D models in the future.
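For readers unfamiliar with the two building blocks named in the abstract, the sketch below illustrates them in PyTorch under stated assumptions: an additive attention gate following the standard Attention U-Net formulation (gating signal and skip features assumed here to share the same spatial size), and a transfer-learning fine-tuning loop that freezes a pretrained encoder and retrains the remaining layers on the small field-derived target set. The class and function names, the `encoder` attribute, the MSE misfit, and all hyperparameters are illustrative assumptions, not the letter's published implementation.

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Additive attention gate in the style of Attention U-Net (Oktay et al., 2018).

    Skip features x (encoder) and gating signal g (decoder) are projected to a
    common channel dimension, summed, and passed through ReLU, a 1x1 convolution,
    and a sigmoid to form a spatial attention map that reweights the skip features.
    """

    def __init__(self, in_channels_x, in_channels_g, inter_channels):
        super().__init__()
        self.theta_x = nn.Conv2d(in_channels_x, inter_channels, kernel_size=1)
        self.phi_g = nn.Conv2d(in_channels_g, inter_channels, kernel_size=1)
        self.psi = nn.Conv2d(inter_channels, 1, kernel_size=1)
        self.relu = nn.ReLU(inplace=True)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x, g):
        # x and g are assumed to have the same spatial size in this sketch.
        att = self.relu(self.theta_x(x) + self.phi_g(g))
        att = self.sigmoid(self.psi(att))   # (N, 1, H, W) attention coefficients
        return x * att                      # suppress irrelevant skip features


def fine_tune(model, target_loader, epochs=50, lr=1e-4):
    """Transfer-learning stage: freeze the encoder pretrained on synthetic MT data
    and fine-tune the rest on the field-derived target dataset (assumed setup)."""
    for p in model.encoder.parameters():    # `encoder` attribute is an assumption
        p.requires_grad = False
    optimizer = torch.optim.Adam(
        (p for p in model.parameters() if p.requires_grad), lr=lr)
    loss_fn = nn.MSELoss()                  # resistivity-model misfit (assumed)
    model.train()
    for _ in range(epochs):
        for apparent_resistivity, true_model in target_loader:
            optimizer.zero_grad()
            loss = loss_fn(model(apparent_resistivity), true_model)
            loss.backward()
            optimizer.step()
```

Under this assumed setup, the same loop would first be run on the large synthetic dataset with all parameters trainable, and then repeated with the encoder frozen once the field-derived target set is available.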
Pages: 5
Related Papers
50 records in total
  • [31] Data-Driven Ringed Residual U-Net Scheme for Full Waveform Inversion
    Huang, Xingguo
    Wang, Cong
    Ye, Wenrui
    Greenhalgh, Stewart
    Li, Yue
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2024, 62
  • [32] Underwater U-Net: Deep Learning with U-Net for Visual Underwater Moving Object detection
    Bajpai, Vatsalya
    Sharma, Akhilesh
    Subudhi, Badri Narayan
    Veerakumar, T.
    Jakhetiya, Vinit
    OCEANS 2021: SAN DIEGO - PORTO, 2021
  • [33] Attention-augmented U-Net (AA-U-Net) for semantic segmentation
    Rajamani, Kumar T.
    Rani, Priya
    Siebert, Hanna
    ElagiriRamalingam, Rajkumar
    Heinrich, Mattias P.
    SIGNAL IMAGE AND VIDEO PROCESSING, 2023, 17 (04) : 981 - 989
  • [34] Segmentation of Mammogram Images Using U-Net with Fusion of Channel and Spatial Attention Modules (U-Net CASAM)
    Robert Singh, A.
    Vidya, S.
    Hariharasitaraman, S.
    Athisayamani, Suganya
    Hsu, Fang Rong
    Lecture Notes in Networks and Systems, 2024, 966 LNNS : 435 - 448
  • [35] 3-D Gravity Intelligent Inversion by U-Net Network With Data Augmentation
    Zhou, Xinyi
    Chen, Zhaoxi
    Lv, Yandong
    Wang, Shuai
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2023, 61
  • [36] Deep Learning with Limited Data: Organ Segmentation Performance by U-Net
    Bardis, Michelle
    Houshyar, Roozbeh
    Chantaduly, Chanon
    Ushinsky, Alexander
    Glavis-Bloom, Justin
    Shaver, Madeleine
    Chow, Daniel
    Uchio, Edward
    Chang, Peter
    ELECTRONICS, 2020, 9 (08) : 1 - 12
  • [37] Deep reinforcement learning based on transformer and U-Net framework for stock trading
    Yang, Bing
    Liang, Ting
    Xiong, Jian
    Zhong, Chong
    KNOWLEDGE-BASED SYSTEMS, 2023, 262
  • [38] Attention guided U-Net for accurate iris segmentation
    Lian, Sheng
    Luo, Zhiming
    Zhong, Zhun
    Lin, Xiang
    Su, Songzhi
    Li, Shaozi
    JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2018, 56 : 296 - 304
  • [39] Multiscale Attention U-Net for Skin Lesion Segmentation
    Alahmadi, Mohammad D.
    IEEE ACCESS, 2022, 10 : 59145 - 59154
  • [40] Virtual try-on based on attention U-Net
    Hu, Xinrong
    Zhang, Junyu
    Huang, Jin
    Liang, JinXing
    Yu, Feng
    Peng, Tao
    VISUAL COMPUTER, 2022, 38 (9-10): : 3365 - 3376