Local Consensus Enhanced Siamese Network with Reciprocal Loss for Two-view Correspondence Learning

Cited by: 1
Authors
Wang, Linbo [1]
Wu, Jing [1]
Fang, Xianyong [1]
Liu, Zhengyi [1]
Cao, Chenjie [2]
Fu, Yanwei [2,3,4]
Affiliations
[1] Anhui Univ, Sch Comp Sci & Technol, Hefei, Peoples R China
[2] Fudan Univ, Sch Data Sci, Shanghai, Peoples R China
[3] Fudan Univ, Shanghai Key Lab Intelligent Informat Proc, Shanghai, Peoples R China
[4] Zhejiang Normal Univ, Fudan-ISTBI-ZJNU Algorithm Ctr Brain-inspired Intelligence, Jinhua, Zhejiang, Peoples R China
Keywords
Siamese Network; Feature Consensus; Two-view Correspondences;
DOI
10.1145/3581783.3612458
CLC Classification
TP18 [Theory of Artificial Intelligence];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Recent studies of two-view correspondence learning usually establish an end-to-end network to jointly predict correspondence reliability and relative pose. We improve such a framework from two aspects. First, we propose a Local Feature Consensus (LFC) plugin block to augment the features of existing models. Given a correspondence feature, the block augments its neighboring features with mutual neighborhood consensus and aggregates them to produce an enhanced feature. Since inliers obey a single cross-view transformation and share more consistent learned features than outliers, feature consensus strengthens inlier correlation and suppresses outlier distraction, making the output features more discriminative for classifying inliers/outliers. Second, existing approaches supervise network training with the ground-truth correspondences and the essential matrix that projects one image of an input pair onto the other, without considering the information from the reverse mapping. We extend existing models to a Siamese network with a reciprocal loss that exploits the supervision of this mutual projection, which considerably improves matching performance without introducing additional model parameters. Building upon MSA-Net [30], we implement the two proposals and experimentally achieve state-of-the-art performance on benchmark datasets.
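The reciprocal supervision idea rests on a standard fact of epipolar geometry: if the essential matrix E relates view 1 to view 2 (x2ᵀ E x1 = 0), then Eᵀ relates view 2 back to view 1. A minimal NumPy sketch of a symmetric epipolar objective built on this fact is given below; it is an illustrative stand-in, not the paper's actual loss (the function names and the point-to-line distance form are assumptions of this sketch):

```python
import numpy as np

def epipolar_dist(x1, x2, E):
    """Squared distance from points x2 to the epipolar lines E @ x1.

    x1, x2: (N, 3) homogeneous image coordinates; E: (3, 3) essential matrix.
    """
    lines = x1 @ E.T                       # epipolar lines in the second view, (N, 3)
    num = np.sum(x2 * lines, axis=1) ** 2  # (x2^T E x1)^2 per correspondence
    den = lines[:, 0] ** 2 + lines[:, 1] ** 2
    return num / den

def reciprocal_loss(x1, x2, E):
    """Average the forward term (view 1 -> 2 via E) and the reverse term
    (view 2 -> 1 via E^T), so both projection directions supervise training."""
    fwd = epipolar_dist(x1, x2, E).mean()
    bwd = epipolar_dist(x2, x1, E.T).mean()
    return 0.5 * (fwd + bwd)
```

Because the forward and reverse terms normalize by different epipolar lines, they are not identical, and averaging them yields the classical symmetric epipolar distance; the paper's Siamese design shares one set of weights across both directions, which is why the reciprocal supervision adds no parameters.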
Pages: 5235-5243
Page count: 9