Similarity-Preserving Knowledge Distillation

Cited by: 692
Authors
Tung, Frederick [1 ,2 ]
Mori, Greg [1 ,2 ]
Affiliations
[1] Simon Fraser Univ, Burnaby, BC, Canada
[2] Borealis AI, Toronto, ON, Canada
Keywords
DOI
10.1109/ICCV.2019.00145
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Knowledge distillation is a widely applicable technique for training a student neural network under the guidance of a trained teacher network. For example, in neural network compression, a high-capacity teacher is distilled to train a compact student; in privileged learning, a teacher trained with privileged data is distilled to train a student without access to that data. The distillation loss determines how a teacher's knowledge is captured and transferred to the student. In this paper, we propose a new form of knowledge distillation loss that is inspired by the observation that semantically similar inputs tend to elicit similar activation patterns in a trained network. Similarity-preserving knowledge distillation guides the training of a student network such that input pairs that produce similar (dissimilar) activations in the teacher network produce similar (dissimilar) activations in the student network. In contrast to previous distillation methods, the student is not required to mimic the representation space of the teacher, but rather to preserve the pairwise similarities in its own representation space. Experiments on three public datasets demonstrate the potential of our approach.
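The abstract describes the method only at a high level. The sketch below is a minimal, hedged reconstruction of the pairwise-similarity idea in PyTorch (the function name sp_loss and all variable names are illustrative, not the authors' released code): batch activations from one teacher layer and one student layer are flattened, turned into b x b batch similarity (Gram) matrices, row-normalized, and the student is penalized for deviating from the teacher's similarity structure.

```python
import torch
import torch.nn.functional as F

def sp_loss(feat_t: torch.Tensor, feat_s: torch.Tensor) -> torch.Tensor:
    """Pairwise-similarity distillation loss for one teacher/student layer pair.

    feat_t, feat_s: activation maps of shape (b, c, h, w); channel and spatial
    sizes may differ between teacher and student, only the batch size b must match.
    """
    b = feat_t.size(0)
    # Flatten each sample's activations and form b x b batch similarity (Gram)
    # matrices; L2-normalize each row so the two networks' scales are comparable.
    q_t = feat_t.reshape(b, -1)
    q_s = feat_s.reshape(b, -1)
    g_t = F.normalize(q_t @ q_t.t(), p=2, dim=1)
    g_s = F.normalize(q_s @ q_s.t(), p=2, dim=1)
    # Penalize mismatches between teacher and student pairwise similarities.
    return (g_t - g_s).pow(2).sum() / (b * b)
```

In training, a term like this would typically be added to the student's usual task loss (e.g. cross-entropy), weighted by a hyperparameter and summed over one or more teacher/student layer pairs.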
Pages: 1365-1374
Number of pages: 10
Related Papers
50 records in total
  • [1] BERTtoCNN: Similarity-preserving enhanced knowledge distillation for stance detection
    Li, Yang
    Sun, Yuqing
    Zhu, Nana
    PLOS ONE, 2021, 16 (09):
  • [2] Lightweight Depth Completion Network with Local Similarity-Preserving Knowledge Distillation
    Jeong, Yongseop
    Park, Jinsun
    Cho, Donghyeon
    Hwang, Yoonjin
    Choi, Seibum B.
    Kweon, In So
    SENSORS, 2022, 22 (19)
  • [3] Lightweight Deep CNN for Natural Image Matting via Similarity-Preserving Knowledge Distillation
    Yoon, Donggeun
    Park, Jinsun
    Cho, Donghyeon
IEEE SIGNAL PROCESSING LETTERS, 2020, 27 : 2139 - 2143
  • [4] SPSD: Similarity-preserving self-distillation for video–text retrieval
    Jiachen Wang
    Yan Hua
    Yingyun Yang
    Hongwei Kou
    International Journal of Multimedia Information Retrieval, 2023, 12
  • [5] Multimodal Similarity-Preserving Hashing
    Masci, Jonathan
    Bronstein, Michael M.
    Bronstein, Alexander M.
    Schmidhuber, Juergen
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2014, 36 (04) : 824 - 830
  • [6] Micro-expression Action Unit Detection with Dual-view Attentive Similarity-Preserving Knowledge Distillation
    Li, Yante
    Peng, Wei
    Zhao, Guoying
    2021 16TH IEEE INTERNATIONAL CONFERENCE ON AUTOMATIC FACE AND GESTURE RECOGNITION (FG 2021), 2021,
  • [7] SPSD: Similarity-preserving self-distillation for video-text retrieval
    Wang, Jiachen
    Hua, Yan
    Yang, Yingyun
    Kou, Hongwei
    INTERNATIONAL JOURNAL OF MULTIMEDIA INFORMATION RETRIEVAL, 2023, 12 (02)
  • [8] Brain Tumors Classification in MRIs Based on Personalized Federated Distillation Learning With Similarity-Preserving
    Wu, Bo
    Shi, Donghui
    Aguilar, Jose
    INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY, 2025, 35 (02)
  • [9] Similarity-Preserving Hashing for Stock Analysis
    Inphadung, Nongmai
    Kamonsantiroj, Suwatchai
    Pipanmaekaporn, Luepol
    PROCEEDINGS OF THE 2019 5TH INTERNATIONAL CONFERENCE ON E-BUSINESS AND APPLICATIONS (ICEBA 2019), 2019, : 94 - 99
  • [10] Similarity-preserving linear maps on B(H)
    Ji, GX
    LINEAR ALGEBRA AND ITS APPLICATIONS, 2003, 360 : 249 - 257