Similarity Graph-correlation Reconstruction Network for unsupervised cross-modal hashing

Cited by: 16
Authors
Yao, Dan [1 ,2 ]
Li, Zhixin [1 ,2 ]
Li, Bo [1 ,2 ]
Zhang, Canlong [1 ,2 ]
Ma, Huifang [3 ]
Affiliations
[1] Guangxi Normal Univ, Key Lab Educ Blockchain & Intelligent Technol, Minist Educ, Guilin 541004, Peoples R China
[2] Guangxi Normal Univ, Guangxi Key Lab Multisource Informat Min & Secur, Guilin 541004, Peoples R China
[3] Northwest Normal Univ, Coll Comp Sci & Engn, Lanzhou 730070, Peoples R China
Funding
National Natural Science Foundation of China;
关键词
Cross-modal retrieval; Unsupervised cross-modal hashing; Similarity matrix; Graph rebasing; Similarity reconstruction;
DOI
10.1016/j.eswa.2023.121516
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Existing cross-modal hashing retrieval methods can simultaneously increase retrieval speed and reduce storage space. However, these methods face a major challenge in determining the similarity metric between two modalities: the accuracy of intra-modal and inter-modal similarity measurements is inadequate, and the large gap between modalities leads to semantic bias. In this paper, we propose a Similarity Graph-correlation Reconstruction Network (SGRN) for unsupervised cross-modal hashing. In particular, a local relation graph rebasing module filters out graph nodes with weak similarity and associates graph nodes with strong similarity, yielding fine-grained intra-modal similarity relation graphs. A global relation graph reconstruction module further strengthens cross-modal correlation and performs fine-grained similarity alignment between modalities. In addition, to bridge the modality gap, we combine the similarity representations of real-valued and hash features to design intra-modal and inter-modal training strategies. Extensive experiments on two cross-modal retrieval datasets validate the superiority of the proposed method, which significantly improves retrieval performance.
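The record gives only this high-level description of the rebasing and fusion steps, so the following is a minimal sketch rather than SGRN's actual implementation: it builds a cosine-similarity graph over one modality's features, drops weak-similarity edges, boosts strong ones, and fuses real-valued and hash-feature similarity graphs with a convex weight. The function names, thresholds, and fusion weight are all hypothetical illustrations.

    import numpy as np

    def rebase_similarity_graph(features, weak_thresh=0.3, strong_thresh=0.7, boost=0.5):
        # Build a cosine-similarity graph over one modality's features.
        normed = features / np.clip(np.linalg.norm(features, axis=1, keepdims=True), 1e-12, None)
        S = normed @ normed.T
        # Filter out weakly similar node pairs (hypothetical hard threshold).
        S[S < weak_thresh] = 0.0
        # Associate strongly similar node pairs more tightly by boosting their edges.
        strong = S >= strong_thresh
        S[strong] = np.minimum(S[strong] * (1.0 + boost), 1.0)
        return S

    def fuse_similarities(S_real, S_hash, alpha=0.5):
        # Hypothetical convex combination of real-valued and hash-feature
        # similarity graphs, standing in for the paper's combined training signal.
        return alpha * S_real + (1.0 - alpha) * S_hash

    # Usage: fine-grained intra-modal graphs from real-valued and hash features.
    rng = np.random.default_rng(0)
    real_feat = rng.standard_normal((8, 128))          # e.g., image encoder outputs
    hash_feat = np.sign(rng.standard_normal((8, 64)))  # relaxed/binary hash codes
    S = fuse_similarities(rebase_similarity_graph(real_feat),
                          rebase_similarity_graph(hash_feat))
    print(S.shape)  # (8, 8)

Thresholding before fusion keeps the graphs sparse, which matches the abstract's emphasis on suppressing weak similarities before aligning the two modalities.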
Pages: 13
Related Papers
50 records in total
  • [21] Unsupervised Cross-Modal Hashing with Soft Constraint
    Zhou, Yuxuan
    Li, Yaoxian
    Liu, Rui
    Hao, Lingyun
    Sun, Yuanliang
    ADVANCES IN MULTIMEDIA INFORMATION PROCESSING - PCM 2017, PT II, 2018, 10736 : 756 - 765
  • [22] Aggregation-Based Graph Convolutional Hashing for Unsupervised Cross-Modal Retrieval
    Zhang, Peng-Fei
    Li, Yang
    Huang, Zi
    Xu, Xin-Shun
    IEEE TRANSACTIONS ON MULTIMEDIA, 2022, 24 : 466 - 479
  • [23] Set and Rebase: Determining the Semantic Graph Connectivity for Unsupervised Cross-Modal Hashing
    Wang, Weiwei
    Shen, Yuming
    Zhang, Haofeng
    Yao, Yazhou
    Liu, Li
    PROCEEDINGS OF THE TWENTY-NINTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2020, : 853 - 859
  • [24] Gaussian similarity preserving for cross-modal hashing
    Lin, Liuyin
    Shu, Xin
    NEUROCOMPUTING, 2022, 494 : 446 - 454
  • [25] Deep noise mitigation and semantic reconstruction hashing for unsupervised cross-modal retrieval
    Zhang, Cheng
    Wan, Yuan
    Qiang, Haopeng
    NEURAL COMPUTING & APPLICATIONS, 2024, 36 (10): 5383 - 5397
  • [27] Deep Semantic-Preserving Reconstruction Hashing for Unsupervised Cross-Modal Retrieval
    Cheng, Shuli
    Wang, Liejun
    Du, Anyu
    ENTROPY, 2020, 22 (11) : 1 - 22
  • [28] Dual-matrix guided reconstruction hashing for unsupervised cross-modal retrieval
    Lin, Ziyong
    Jiang, Xiaolong
    Zhang, Jie
    Li, Mingyong
    INTERNATIONAL JOURNAL OF MULTIMEDIA INFORMATION RETRIEVAL, 2025, 14 (01)
  • [29] Unsupervised Multi-modal Hashing for Cross-Modal Retrieval
    Yu, Jun
    Wu, Xiao-Jun
    Zhang, Donglin
    COGNITIVE COMPUTATION, 2022, 14 (03) : 1159 - 1171