Cross-modal hashing retrieval with compatible triplet representation

Cited: 0
Authors
Hao, Zhifeng [1 ]
Jin, Yaochu [2 ]
Yan, Xueming [1 ,3 ]
Wang, Chuyue [3 ]
Yang, Shangshang [4 ]
Ge, Hong [5 ]
Affiliations
[1] Shantou Univ, Key Lab Intelligent Mfg Technol, Shantou 515063, Guangdong, Peoples R China
[2] Westlake Univ, Sch Engn, Hangzhou 310030, Peoples R China
[3] Guangdong Univ Foreign Studies, Sch Informat Sci & Technol, Guangzhou 510006, Peoples R China
[4] Anhui Univ, Sch Artificial Intelligence, Hefei 230601, Peoples R China
[5] South China Normal Univ, Sch Comp Sci, Guangzhou, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Cross-modal hashing retrieval; Compatible triplet; Label network; Fusion attention;
DOI
10.1016/j.neucom.2024.128293
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Cross-modal hashing retrieval has emerged as a promising approach to handling diverse multimodal data, owing to its storage efficiency and query speed. However, existing cross-modal hashing retrieval methods often oversimplify similarity by considering only identical labels across modalities, and they are sensitive to noise in the original multimodal data. To tackle these challenges, we propose a cross-modal hashing retrieval approach with compatible triplet representation. The proposed approach integrates the essential feature representations and semantic information of text and images into their corresponding multi-label feature representations, and introduces a fusion attention module that extracts channel and spatial attention features from the text and image modalities, respectively, thereby enhancing the compatible triplet-based semantic information used in cross-modal hashing learning. Comprehensive experiments on three public datasets demonstrate that the proposed approach outperforms state-of-the-art methods in retrieval accuracy.
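The abstract names two technical ingredients: a fusion attention module combining channel and spatial attention, and a triplet-based objective over hash codes. The PyTorch sketch below is a minimal illustration under our own assumptions, not the authors' implementation: we assume a CBAM-style channel-then-spatial attention block and a standard triplet margin loss on tanh-relaxed hash codes; all class and function names (FusionAttention, triplet_hash_loss, etc.) are hypothetical.

```python
# Minimal sketch of fusion attention + a triplet loss on relaxed hash codes.
# Assumptions: CBAM-style attention ordering and a standard triplet margin
# objective; neither is confirmed by the paper's abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelAttention(nn.Module):
    """Reweight feature channels using pooled global statistics."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) -> per-channel weights from avg- and max-pooled maps.
        avg = x.mean(dim=(2, 3))
        mx = x.amax(dim=(2, 3))
        w = torch.sigmoid(self.fc(avg) + self.fc(mx))
        return x * w.unsqueeze(-1).unsqueeze(-1)

class SpatialAttention(nn.Module):
    """Reweight spatial positions using channel-pooled maps."""
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg = x.mean(dim=1, keepdim=True)   # (B, 1, H, W)
        mx = x.amax(dim=1, keepdim=True)    # (B, 1, H, W)
        w = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * w

class FusionAttention(nn.Module):
    """Channel attention followed by spatial attention (CBAM-style)."""
    def __init__(self, channels: int):
        super().__init__()
        self.channel = ChannelAttention(channels)
        self.spatial = SpatialAttention()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.spatial(self.channel(x))

def triplet_hash_loss(anchor, positive, negative, margin: float = 0.5):
    """Triplet margin loss on tanh-relaxed codes: pull semantically matching
    cross-modal pairs together, push mismatched pairs apart by a margin."""
    a, p, n = torch.tanh(anchor), torch.tanh(positive), torch.tanh(negative)
    d_ap = (a - p).pow(2).sum(dim=1)
    d_an = (a - n).pow(2).sum(dim=1)
    return F.relu(d_ap - d_an + margin).mean()

if __name__ == "__main__":
    feat = torch.randn(4, 64, 14, 14)        # e.g. an image feature map
    attended = FusionAttention(64)(feat)
    codes = torch.randn(4, 32)               # 32-bit relaxed hash codes
    loss = triplet_hash_loss(codes, codes + 0.1, torch.randn(4, 32))
    print(attended.shape, loss.item())
```

In this reading, the attention block would refine each modality's features before hashing, and the triplet term would enforce cross-modal ranking; how the paper forms its "compatible" triplets across modalities is not recoverable from the abstract alone.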
Pages: 9
Related Papers
50 records in total
  • [41] Wang, Lu; Zareapoor, Masoumeh; Yang, Jie; Zheng, Zhonglong. Asymmetric Correlation Quantization Hashing for Cross-Modal Retrieval. IEEE TRANSACTIONS ON MULTIMEDIA, 2022, 24: 3665-3678.
  • [42] Lu, Xu; Zhang, Huaxiang; Sun, Jiande; Wang, Zhenhua; Guo, Peilian; Wan, Wenbo. Discriminative correlation hashing for supervised cross-modal retrieval. SIGNAL PROCESSING-IMAGE COMMUNICATION, 2018, 65: 221-230.
  • [43] Li, Kai; Qi, Guo-Jun; Ye, Jun; Hua, Kien A. Linear Subspace Ranking Hashing for Cross-Modal Retrieval. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2017, 39 (09): 1825-1838.
  • [44] Yao, Tao; Zhang, Zhiwang; Yan, Lianshan; Yue, Jun; Tian, Qi. Discrete Robust Supervised Hashing for Cross-Modal Retrieval. IEEE ACCESS, 2019, 7: 39806-39814.
  • [45] Nie, Xiushan; Wang, Bowei; Li, Jiajia; Hao, Fanchang; Jian, Muwei; Yin, Yilong. Deep Multiscale Fusion Hashing for Cross-Modal Retrieval. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2021, 31 (01): 401-410.
  • [46] Hu, Peng; Wang, Xu; Zhen, Liangli; Peng, Dezhong. Separated Variational Hashing Networks for Cross-Modal Retrieval. PROCEEDINGS OF THE 27TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA (MM'19), 2019: 1721-1729.
  • [47] Zhong, Fangming; Chen, Zhikui; Min, Geyong. Deep Discrete Cross-Modal Hashing for Cross-Media Retrieval. PATTERN RECOGNITION, 2018, 83: 64-77.
  • [48] Li, Fengling; Wang, Bowen; Zhu, Lei; Li, Jingjing; Zhang, Zheng; Chang, Xiaojun. Cross-Domain Transfer Hashing for Efficient Cross-Modal Retrieval. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (10): 9664-9677.
  • [49] Cao, Wenming; Lin, Qiubin; He, Zhihai; He, Zhiquan. Hybrid representation learning for cross-modal retrieval. NEUROCOMPUTING, 2019, 345: 45-57.
  • [50] Zuo, Ruifan; Zheng, Chaoqun; Li, Fengling; Zhu, Lei; Zhang, Zheng. Privacy-Enhanced Prototype-Based Federated Cross-Modal Hashing for Cross-Modal Retrieval. ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS, 2024, 20 (09).