TransMatch: Transformer-based correspondence pruning via local and global consensus

Cited by: 0
Authors
Liu, Yizhang [1 ,2 ]
Li, Yanping [3 ]
Zhao, Shengjie [1 ,4 ]
Affiliations
[1] Tongji Univ, Sch Software Engn, Shanghai 201804, Peoples R China
[2] Fuzhou Univ, Coll Comp & Data Sci, Fuzhou, Peoples R China
[3] Tongji Univ, Dept Comp Sci & Technol, Shanghai 201804, Peoples R China
[4] Minist Educ, Engn Res Ctr, Key Software Technol Smart City Percept & Planning, Shanghai 200003, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Correspondence pruning; Transformer; Local and global consensus; Camera pose estimation;
DOI
10.1016/j.patcog.2024.111120
CLC Number
TP18 [Theory of Artificial Intelligence];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Correspondence pruning aims to filter out false correspondences (a.k.a. outliers) from the initial feature correspondence set, which is pivotal to matching-based vision tasks such as image registration. To solve this problem, most existing learning-based methods typically use a multilayer perceptron framework together with several well-designed modules to capture local and global contexts. However, few studies have explored how local and global consensuses interact to form cohesive feature representations. This paper proposes a novel framework called TransMatch, which leverages the full power of the Transformer structure to extract richer features and facilitate progressive local and global consensus learning. Beyond enhancing feature learning, the Transformer is used as a powerful tool to connect these two consensuses. Benefiting from the Transformer, TransMatch is surprisingly effective at differentiating correspondences. Experimental results on correspondence pruning and camera pose estimation demonstrate that the proposed TransMatch outperforms other state-of-the-art methods by a large margin. The code will be available at https://github.com/lyz8023lyp/TransMatch/.
Pages: 9
Related Papers
50 records in total
  • [31] A transformer-based convolutional local attention (ConvLoA) method for temporal action localization
    Artham, Sainithin
    Shaikh, Soharab Hossain
    INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2024,
  • [32] DeepMatcher: A deep transformer-based network for robust and accurate local feature matching
    Xie, Tao
    Dai, Kun
    Wang, Ke
    Li, Ruifeng
    Zhao, Lijun
    EXPERT SYSTEMS WITH APPLICATIONS, 2024, 237
  • [33] Transformer-Based Global PointPillars 3D Object Detection Method
    Zhang, Lin
    Meng, Hua
    Yan, Yunbing
    Xu, Xiaowei
    ELECTRONICS, 2023, 12 (14)
  • [34] Dual Network Structure With Interweaved Global-Local Feature Hierarchy for Transformer-Based Object Detection in Remote Sensing Image
    Xue, Jingqian
    He, Da
    Liu, Mengwei
    Shi, Qian
    IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING, 2022, 15 : 6856 - 6866
  • [35] Efficient Transformer-based Large Scale Language Representations using Hardware-friendly Block Structured Pruning
    Li, Bingbing
    Kong, Zhenglun
    Zhang, Tianyun
    Li, Ji
    Li, Zhengang
    Liu, Hang
    Ding, Caiwen
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EMNLP 2020, 2020,
  • [36] Efficient Transformer-Based Compressed Video Modeling via Informative Patch Selection
    Suzuki, Tomoyuki
    Aoki, Yoshimitsu
    SENSORS, 2023, 23 (01)
  • [37] OPTICAL SATELLITE IMAGE CHANGE DETECTION VIA TRANSFORMER-BASED SIAMESE NETWORK
    Wu, Yang
    Wang, Yuyao
    Li, Yanheng
    Xu, Qizhi
    2022 IEEE INTERNATIONAL GEOSCIENCE AND REMOTE SENSING SYMPOSIUM (IGARSS 2022), 2022, : 1436 - 1439
  • [38] Exploring Visual Relationships via Transformer-based Graphs for Enhanced Image Captioning
    Li, Jingyu
    Mao, Zhendong
    Li, Hao
    Chen, Weidong
    Zhang, Yongdong
    ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS, 2024, 20 (05)
  • [39] Knowledge-Enhanced Conversational Recommendation via Transformer-Based Sequential Modeling
    Zou, Jie
    Sun, Aixin
    Long, Cheng
    Kanoulas, Evangelos
    ACM TRANSACTIONS ON INFORMATION SYSTEMS, 2024, 42 (06)
  • [40] Improving Object Grasp Performance via Transformer-Based Sparse Shape Completion
    Chen, Wenkai
    Liang, Hongzhuo
    Chen, Zhaopeng
    Sun, Fuchun
    Zhang, Jianwei
    JOURNAL OF INTELLIGENT & ROBOTIC SYSTEMS, 2022, 104