Unsupervised Deep Cross-Modal Hashing by Knowledge Distillation for Large-scale Cross-modal Retrieval

Cited by: 18

Authors
Li, Mingyong [1 ,2 ]
Wang, Hongya [1 ,3 ]
Affiliations
[1] Donghua Univ, Coll Comp Sci & Technol, Shanghai, Peoples R China
[2] Chongqing Normal Univ, Coll Comp & Informat Sci, Chongqing, Peoples R China
[3] Shanghai Key Lab Comp Software Evaluating & Testi, Shanghai, Peoples R China
Keywords
cross-modal hashing; unsupervised learning; knowledge distillation; cross-modal retrieval;
DOI
10.1145/3460426.3463626
CLC classification number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Cross-modal hashing (CMH) maps heterogeneous multi-modality data into compact binary codes to achieve fast and flexible retrieval across different modalities, especially at large scale. Because it requires no manual annotation, unsupervised cross-modal hashing has broader application prospects than supervised methods. However, existing unsupervised methods struggle to achieve satisfactory performance due to the lack of credible supervisory information. To solve this problem, inspired by knowledge distillation, we propose a novel unsupervised Knowledge Distillation Cross-Modal Hashing method (KDCMH), which uses similarity information distilled from an unsupervised method to guide a supervised method. Specifically, the teacher model first adopts an unsupervised distribution-based similarity hashing method to construct a modal-fusion similarity matrix. Then, under the supervision of the teacher model's distilled information, the student model generates more discriminative hash codes. Extensive experiments on two public datasets, NUS-WIDE and MIRFLICKR-25K, demonstrate that KDCMH significantly outperforms several representative unsupervised cross-modal hashing methods.
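The abstract outlines a two-stage pipeline: a teacher fuses per-modality similarities into one modal-fusion matrix, and a student learns hash codes under that matrix's supervision. A minimal NumPy sketch of this idea follows; the cosine-similarity choice, the convex-combination fusion weight `alpha`, and the MSE-style distillation loss are all illustrative assumptions, since the paper's actual distribution-based fusion rule and training objective are not given in this record:

```python
import numpy as np

def cosine_sim(X):
    # Row-normalize features, then the inner product gives
    # the pairwise cosine-similarity matrix.
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    return Xn @ Xn.T

def fused_similarity(img_feats, txt_feats, alpha=0.5):
    # Hypothetical modal fusion: a convex combination of the
    # per-modality similarity matrices (the paper's exact
    # distribution-based fusion may differ).
    return alpha * cosine_sim(img_feats) + (1 - alpha) * cosine_sim(txt_feats)

def distillation_loss(hash_codes, S_teacher):
    # Student objective sketch: align the similarity structure of
    # the (relaxed) hash codes with the teacher's fused matrix.
    S_student = cosine_sim(hash_codes)
    return np.mean((S_student - S_teacher) ** 2)

rng = np.random.default_rng(0)
img = rng.standard_normal((8, 128))            # 8 image feature vectors
txt = rng.standard_normal((8, 64))             # paired text feature vectors
S = fused_similarity(img, txt)                 # teacher's fused similarity
codes = np.sign(rng.standard_normal((8, 16)))  # toy 16-bit binary codes
loss = distillation_loss(codes, S)
```

In a real system the student would be a deep network trained by gradient descent on a relaxation of the binary codes, with `S` precomputed once by the teacher; the sketch only shows how the fused matrix acts as the distilled supervisory signal.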
Pages: 183-191
Page count: 9
Related Papers (50 in total)
  • [21] Deep Semantic Adversarial Hashing Based on Autoencoder for Large-Scale Cross-Modal Retrieval
    Li, Mingyong
    Wang, Hongya
    2020 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO WORKSHOPS (ICMEW), 2020,
  • [22] Dark knowledge association guided hashing for unsupervised cross-modal retrieval
    Kang, Han
    Zhang, Xiaowei
    Han, Wenpeng
    Zhou, Mingliang
    MULTIMEDIA SYSTEMS, 2024, 30 (06)
  • [23] Robust Unsupervised Cross-modal Hashing for Multimedia Retrieval
    Cheng, Miaomiao
    Jing, Liping
    Ng, Michael K.
    ACM TRANSACTIONS ON INFORMATION SYSTEMS, 2020, 38 (03)
  • [24] Large-Scale Supervised Hashing for Cross-Modal Retrieval
    Karbil, Loubna
    Daoudi, Imane
    2017 IEEE/ACS 14TH INTERNATIONAL CONFERENCE ON COMPUTER SYSTEMS AND APPLICATIONS (AICCSA), 2017, : 803 - 808
  • [25] Deep Cross-Modal Hashing
    Jiang, Qing-Yuan
    Li, Wu-Jun
    30TH IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2017), 2017, : 3270 - 3278
  • [26] Unsupervised Contrastive Cross-Modal Hashing
    Hu, Peng
    Zhu, Hongyuan
    Lin, Jie
    Peng, Dezhong
    Zhao, Yin-Ping
    Peng, Xi
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2023, 45 (03) : 3877 - 3889
  • [27] Completely Unsupervised Cross-Modal Hashing
    Duan, Jiasheng
    Zhang, Pengfei
    Huang, Zi
    DATABASE SYSTEMS FOR ADVANCED APPLICATIONS (DASFAA 2020), PT I, 2020, 12112 : 178 - 194
  • [28] FDDH: Fast Discriminative Discrete Hashing for Large-Scale Cross-Modal Retrieval
    Liu, Xin
    Wang, Xingzhi
    Cheung, Yiu-Ming
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2022, 33 (11) : 6306 - 6320
  • [29] Creating Something from Nothing: Unsupervised Knowledge Distillation for Cross-Modal Hashing
    Hu, Hengtong
    Xie, Lingxi
    Hong, Richang
    Tian, Qi
    2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2020, : 3120 - 3129
  • [30] Joint and individual matrix factorization hashing for large-scale cross-modal retrieval
    Wang, Di
    Wang, Quan
    He, Lihuo
    Gao, Xinbo
    Tian, Yumin
    PATTERN RECOGNITION, 2020, 107