A Virtual Knowledge Distillation via Conditional GAN

Cited by: 6
Authors
Kim, Sihwan [1 ]
Affiliations
[1] Hana Inst Technol, Big Data & AI Lab, Seoul 06133, South Korea
Source
IEEE ACCESS | 2022 / Volume 10
Keywords
Training; Generators; Knowledge engineering; Bridges; Generative adversarial networks; Task analysis; Collaborative work; Image classification; model compression; knowledge distillation; self-knowledge distillation; collaborative learning; conditional generative adversarial network;
DOI
10.1109/ACCESS.2022.3163398
Chinese Library Classification (CLC)
TP [Automation technology, computer technology];
Discipline Classification Code
0812;
Abstract
Knowledge distillation aims at transferring the knowledge from a pre-trained complex model, called the teacher, to a relatively smaller and faster one, called the student. Unlike previous works that transfer the teacher's softened distributions or feature spaces, in this paper we propose a novel approach, called Virtual Knowledge Distillation (VKD), that transfers a softened distribution produced by a virtual knowledge generator conditioned on the class label. The virtual knowledge generator is trained independently of the teacher, but concurrently with it, to mimic the teacher's softened distributions. Afterwards, when training a student, the virtual knowledge generator can be used in place of the teacher's softened distributions, or combined with existing distillation methods in a straightforward manner. Moreover, with slight modifications, VKD can be applied not only to self-knowledge distillation but also to collaborative learning. We compare our method with several representative distillation methods across various combinations of teacher and student architectures on image classification tasks. Experimental results demonstrate that VKD achieves competitive performance compared to conventional distillation methods, and that combining it with them improves performance by a substantial margin.
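The abstract describes the training pipeline only at a high level. The PyTorch sketch below illustrates one plausible reading of it: a generator conditioned on the class label is trained alongside the teacher to mimic the teacher's softened outputs, and the student is later distilled from the generator's "virtual" distributions instead of the teacher's. All names, layer sizes, the temperature, and the loss weighting are illustrative assumptions, and the adversarial discriminator of the conditional GAN is omitted in favor of a direct KL-matching objective, so this is a sketch of the idea rather than the paper's implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = 10      # assumed number of classes (e.g. a CIFAR-10-like setup)
TEMPERATURE = 4.0     # assumed softening temperature

class VirtualKnowledgeGenerator(nn.Module):
    # Maps a class label (plus noise) to "virtual" logits whose softened
    # distribution is trained to mimic the teacher's output for that class.
    def __init__(self, num_classes=NUM_CLASSES, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed_dim = embed_dim
        self.embed = nn.Embedding(num_classes, embed_dim)
        self.net = nn.Sequential(
            nn.Linear(embed_dim * 2, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_classes),
        )

    def forward(self, labels):
        noise = torch.randn(labels.size(0), self.embed_dim, device=labels.device)
        return self.net(torch.cat([self.embed(labels), noise], dim=1))

def soft_kl(logits, target_logits, temperature=TEMPERATURE):
    # Standard distillation loss: KL divergence between temperature-softened
    # distributions, scaled by T^2 as in conventional knowledge distillation.
    return F.kl_div(
        F.log_softmax(logits / temperature, dim=1),
        F.softmax(target_logits / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2

def generator_step(generator, teacher_logits, labels, optimizer):
    # Train the generator, concurrently with the teacher, to mimic the
    # teacher's softened distribution for the given class labels.
    loss = soft_kl(generator(labels), teacher_logits.detach())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def student_step(student, generator, images, labels, optimizer, alpha=0.5):
    # Distill the student from the generator's virtual knowledge instead of
    # the teacher's outputs; alpha balances hard-label and distillation terms.
    student_logits = student(images)
    with torch.no_grad():
        virtual_logits = generator(labels)
    loss = (1 - alpha) * F.cross_entropy(student_logits, labels) \
           + alpha * soft_kl(student_logits, virtual_logits)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

In this reading, only the class labels (not the teacher itself) are needed at student-training time, which is also why the same generator could be reused for self-distillation or collaborative learning as the abstract suggests.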
Pages: 34766-34778
Page count: 13
Related Papers
50 records in total
  • [1] A Virtual Knowledge Distillation via Conditional GAN
    Kim, Sihwan
    IEEE Access, 2022, 10 : 34766 - 34778
  • [2] Accumulation Knowledge Distillation for Conditional GAN Compression
    Gao, Tingwei
    Long, Rujiao
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS, ICCVW, 2023, : 1294 - 1303
  • [3] Nickel and Diming Your GAN: A Dual-Method Approach to Enhancing GAN Efficiency via Knowledge Distillation
    Yeo, Sangyeop
    Jang, Yoojin
    Yoo, Jaejun
    COMPUTER VISION - ECCV 2024, PT LXXXVIII, 2025, 15146 : 104 - 121
  • [4] Conditional generative data-free knowledge distillation
    Yu, Xinyi
    Yan, Ling
    Yang, Yang
    Zhou, Libo
    Ou, Linlin
    IMAGE AND VISION COMPUTING, 2023, 131
  • [5] Instance-Conditional Knowledge Distillation for Object Detection
    Kang, Zijian
    Zhang, Peizhen
    Zhang, Xiangyu
    Sun, Jian
    Zheng, Nanning
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [6] Conditional Response Augmentation for Dialogue using Knowledge Distillation
    Jeong, Myeongho
    Choi, Seungtaek
    Han, Hojae
    Kim, Kyungho
    Hwang, Seung-won
    INTERSPEECH 2020, 2020, : 3890 - 3894
  • [7] Fast and powerful conditional randomization testing via distillation
    Liu, Molei
    Katsevich, Eugene
    Janson, Lucas
    Ramdas, Aaditya
    BIOMETRIKA, 2022, 109 (02) : 277 - 293
  • [8] KDE-GAN: Enhancing Evolutionary GAN With Knowledge Distillation and Transfer Learning
    Liu, Zheping
    Song, Andy
    Sabar, Nasser
    PROCEEDINGS OF THE 2022 GENETIC AND EVOLUTIONARY COMPUTATION CONFERENCE COMPANION, GECCO 2022, 2022, : 268 - 271
  • [9] Knowledge Distillation via Information Matching
    Zhu, Honglin
    Jiang, Ning
    Tang, Jialiang
    Huang, Xinlei
    NEURAL INFORMATION PROCESSING, ICONIP 2023, PT IV, 2024, 14450 : 405 - 417
  • [10] Collaborative knowledge distillation via filter knowledge transfer
    Gou, Jianping
    Hu, Yue
    Sun, Liyuan
    Wang, Zhi
    Ma, Hongxing
    EXPERT SYSTEMS WITH APPLICATIONS, 2024, 238