Prior-Experience-Based Vision-Language Model for Remote Sensing Image-Text Retrieval

Cited: 0
|
Authors
Tang, Xu [1 ]
Huang, Dabiao [1 ]
Ma, Jingjing [1 ]
Zhang, Xiangrong [1 ]
Liu, Fang [2 ]
Jiao, Licheng [1 ]
Affiliations
[1] Xidian Univ, Minist Educ, Sch Artificial Intelligence, Key Lab Intelligent Percept & Image Understanding, Xian 710071, Peoples R China
[2] Nanjing Univ Sci & Technol, Minist Educ, Sch Comp Sci & Engn, Key Lab Intelligent Percept & Syst for High-Dimens Informat, Nanjing 210094, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Visualization; Feature extraction; Transformers; Semantics; Training; Convolutional neural networks; Recurrent neural networks; Learning from prior experiences (LPEs); multiscale feature fusion; remote sensing image-text retrieval (RSITR); transformer; BIG DATA; FUSION;
DOI
10.1109/TGRS.2024.3464468
Chinese Library Classification (CLC)
P3 [Geophysics]; P59 [Geochemistry];
Discipline Codes
0708; 070902;
Abstract
Remote sensing (RS) image-text retrieval (RSITR) aims to retrieve relevant texts (RS images) based on the content of a given RS image (text). Existing methods typically employ a convolutional neural network (CNN) and a recurrent neural network (RNN) as encoders to learn visual and textual features for retrieval. Although feasible, these encoders do not give the global information hidden in the data the attention it deserves. Transformers have been introduced to mitigate this problem. Nevertheless, the complexity of RS images presents challenges for directly applying transformer-based architectures to multimodal learning in RS scenes, particularly in visual feature extraction and cross-modal interaction. In addition, textual captions are usually simpler than the complex RS images they describe, so the same semantic description can apply to different images. This typical false-negative (FN) sample problem increases the difficulty of RSITR tasks. To address these limitations, we propose a new RSITR model named prior-experience-based RS vision-language (PERSVL). First, modality-specific visual and text encoders extract features from RS images and texts, and a high-level feature complement (HFC) module based on the self-attention mechanism (SAM) is developed for the visual encoder to fully explore the complex content of RS images. Second, a dual-branch multimodal fusion encoder (DBMFE) is designed to complete cross-modal learning. It comprises a dual-branch multimodal interaction (DBMI) module, which fully explores the relationships between the modalities to enrich the visual and textual features, and a branch fusion module, which integrates the cross-modal features and uses a classification head to generate matching scores for retrieval. Finally, a learning from prior experiences (LPE) module is designed to reduce the influence of FN samples by analyzing the historical data produced during model training. Experiments on three popular datasets show that PERSVL achieves superior performance compared with previous methods. By integrating the advantages of natural language and RS images, PERSVL can support various applications, such as environmental monitoring, disaster evaluation, and urban planning. Our source code is available at: https://github.com/TangXu-Group/Cross-modal-remote-sensing-image-and-text-retrieval-models/tree/main/PERSVL.
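The abstract describes the architecture only at a high level. The following minimal PyTorch sketch illustrates the general pattern behind a dual-branch cross-modal interaction with a classification head producing matching scores, plus a toy stand-in for the prior-experience idea of down-weighting suspected false negatives. It is an assumption-laden illustration, not the authors' released implementation (see the GitHub link above); all names (CrossModalBranch, MatchingHead, lpe_loss_weights) and the 0.8 threshold are illustrative.

import torch
import torch.nn as nn

class CrossModalBranch(nn.Module):
    # One branch of a dual-branch interaction: tokens of one modality
    # attend over the other modality's tokens via cross-attention.
    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, query_tokens, context_tokens):
        out, _ = self.attn(query_tokens, context_tokens, context_tokens)
        return self.norm(query_tokens + out)  # residual connection + norm

class MatchingHead(nn.Module):
    # Fuses the two enriched branches and scores each image-text pair.
    def __init__(self, dim: int):
        super().__init__()
        self.cls = nn.Linear(2 * dim, 2)  # match / no-match logits

    def forward(self, vis_tokens, txt_tokens):
        fused = torch.cat([vis_tokens.mean(1), txt_tokens.mean(1)], dim=-1)
        return self.cls(fused)

def lpe_loss_weights(history_scores, threshold=0.8):
    # Toy "prior experience" rule: pairs labeled negative that repeatedly
    # scored high in earlier epochs are treated as likely false negatives
    # and dropped from the loss (weight 0); all others keep weight 1.
    return torch.where(history_scores > threshold,
                       torch.zeros_like(history_scores),
                       torch.ones_like(history_scores))

# Usage: 4 images with 49 patch tokens, 4 captions with 20 word tokens.
vis, txt = torch.randn(4, 49, 256), torch.randn(4, 20, 256)
logits = MatchingHead(256)(CrossModalBranch(256)(vis, txt),
                           CrossModalBranch(256)(txt, vis))
print(logits.shape)  # torch.Size([4, 2])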
Pages: 13
Related Papers
50 records in total
  • [21] A Deep Semantic Alignment Network for the Cross-Modal Image-Text Retrieval in Remote Sensing
    Cheng, Qimin
    Zhou, Yuzhuo
    Fu, Peng
    Xu, Yuan
    Zhang, Liang
    IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING, 2021, 14 : 4284 - 4297
  • [22] Toward Efficient and Accurate Remote Sensing Image-Text Retrieval With a Coarse-to-Fine Approach
    Zhou, Wenqian
    Wu, Hanlin
    Deng, Pei
    IEEE GEOSCIENCE AND REMOTE SENSING LETTERS, 2025, 22
  • [23] Strong and Weak Prompt Engineering for Remote Sensing Image-Text Cross-Modal Retrieval
    Sun, Tianci
    Zheng, Chengyu
    Li, Xiu
    Nie, Jie
    Gao, Yanli
    Huang, Lei
    Wei, Zhiqiang
    IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING, 2025, 18 : 6968 - 6980
  • [24] ChatEarthNet: a global-scale image-text dataset empowering vision-language geo-foundation models
    Yuan, Zhenghang
    Xiong, Zhitong
    Mou, Lichao
    Zhu, Xiao Xiang
    EARTH SYSTEM SCIENCE DATA, 2025, 17 (03) : 1245 - 1263
  • [25] Bootstrapping Interactive Image-Text Alignment for Remote Sensing Image Captioning
    Yang, Cong
    Li, Zuchao
    Zhang, Lefei
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2024, 62 : 1 - 12
  • [26] A TEXTURE AND SALIENCY ENHANCED IMAGE LEARNING METHOD FOR CROSS-MODAL REMOTE SENSING IMAGE-TEXT RETRIEVAL
    Yang, Rui
    Zhang, Di
    Guo, YanHe
    Wang, Shuang
    IGARSS 2023 - 2023 IEEE INTERNATIONAL GEOSCIENCE AND REMOTE SENSING SYMPOSIUM, 2023, : 4895 - 4898
  • [27] LuoJiaHOG: A hierarchy oriented geo-aware image caption dataset for remote sensing image-text retrieval
    Zhao, Yuanxin
    Zhang, Mi
    Yang, Bingnan
    Zhang, Zhan
    Kang, Jujia
    Gong, Jianya
    ISPRS JOURNAL OF PHOTOGRAMMETRY AND REMOTE SENSING, 2025, 222 : 130 - 151
  • [28] VLCA: vision-language aligning model with cross-modal attention for bilingual remote sensing image captioning
    Wei, Tingting
    Yuan, Weilin
    Luo, Junren
    Zhang, Wanpeng
    Lu, Lina
    JOURNAL OF SYSTEMS ENGINEERING AND ELECTRONICS, 2023, 34 (01) : 9 - 18
  • [30] Masking-Based Cross-Modal Remote Sensing Image-Text Retrieval via Dynamic Contrastive Learning
    Zhao, Zuopeng
    Miao, Xiaoran
    He, Chen
    Hu, Jianfeng
    Min, Bingbing
    Gao, Yumeng
    Liu, Ying
    Pharksuwan, Kanyaphakphachsorn
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2024, 62 : 1 - 15