Visual emotion analysis using skill-based multi-teacher knowledge distillation

Cited by: 0
Authors
Cladiere, Tristan [1 ]
Alata, Olivier [1 ]
Ducottet, Christophe [1 ]
Konik, Hubert [1 ]
Legrand, Anne-Claire [1 ]
Affiliations
[1] Univ Jean Monnet St Etienne, Inst Opt Grad Sch, CNRS, Lab Hubert Curien UMR 5516, F-42023 St Etienne, France
Keywords
Visual emotion analysis; Knowledge distillation; Multi-teachers; Student training; Convolutional neural network; Deep learning;
DOI
10.1007/s10044-025-01426-9
Chinese Library Classification (CLC)
TP18 [Theory of artificial intelligence]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
The biggest challenge in visual emotion analysis (VEA) is bridging the affective gap between the features extracted from an image and the emotion it expresses. It is therefore essential to rely on multiple cues to obtain decent predictions. Recent approaches use deep learning models to extract rich features in an automated manner, through complex frameworks built with multi-branch convolutional neural networks and fusion or attention modules. This paper explores a different approach by introducing a three-step training scheme and leveraging knowledge distillation (KD), which reconciles effectiveness and simplicity, and thus achieves promising performance despite using a very basic CNN. KD is involved in the first step, where a student model learns to extract the most relevant features on its own by reproducing those of several teachers specialized in different tasks. The proposed skill-based multi-teacher knowledge distillation (SMKD) loss also ensures that, for each instance, the student focuses more or less on each teacher depending on its capacity to obtain a good prediction, i.e., its relevance. The two remaining steps serve respectively to train the student's classifier and to fine-tune the whole model, both for the VEA task. Experiments on two VEA databases demonstrate the gain in performance offered by our approach: the students consistently outperform both their teachers and state-of-the-art methods.
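The SMKD loss is only described in words above. As a rough illustration of the per-instance teacher weighting it refers to, the sketch below weights each teacher's feature-matching error by how confidently that teacher predicts the ground-truth emotion class. This is a minimal PyTorch-style sketch under stated assumptions: the relevance proxy (teacher probability on the true class), the temperature `tau`, and all function and variable names are illustrative, not the authors' actual formulation.

```python
# Hypothetical sketch of a per-instance weighted multi-teacher feature
# distillation loss, in the spirit of the SMKD idea summarized above.
# Names and the relevance proxy are assumptions, not the paper's code.
import torch
import torch.nn.functional as F

def smkd_loss(student_feats, teacher_feats, teacher_logits, labels, tau=1.0):
    """Weighted feature-matching loss over several teachers.

    student_feats : (B, D) features from the student backbone
    teacher_feats : list of T tensors of shape (B, D), one per teacher
    teacher_logits: list of T tensors of shape (B, C), one per teacher
    labels        : (B,) ground-truth emotion classes (long tensor)
    tau           : temperature controlling how sharply the weights
                    favour the most relevant teacher for each instance
    """
    # Relevance of each teacher for each instance: the probability it
    # assigns to the correct class (one plausible proxy for its
    # "capacity to obtain a good prediction").
    scores = torch.stack(
        [F.softmax(lg, dim=1).gather(1, labels.unsqueeze(1)).squeeze(1)
         for lg in teacher_logits], dim=1)                  # (B, T)
    weights = F.softmax(scores / tau, dim=1)                # (B, T)

    # Per-teacher feature reconstruction error for each instance.
    errs = torch.stack(
        [((student_feats - tf) ** 2).mean(dim=1) for tf in teacher_feats],
        dim=1)                                              # (B, T)

    # Each instance imitates most the teachers that are most relevant to it.
    return (weights * errs).sum(dim=1).mean()
```

In the three-step scheme described in the abstract, a loss of this kind would drive only the first (feature distillation) step; the student's classifier training and the full-model fine-tuning for VEA are separate subsequent steps.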
Pages: 15