Visual emotion analysis using skill-based multi-teacher knowledge distillation

Cited: 0
|
Authors
Cladiere, Tristan [1 ]
Alata, Olivier [1 ]
Ducottet, Christophe [1 ]
Konik, Hubert [1 ]
Legrand, Anne-Claire [1 ]
Affiliations
[1] Univ Jean Monnet St Etienne, Inst Opt Grad Sch, CNRS, Lab Hubert Curien UMR 5516, F-42023 St Etienne, France
Keywords
Visual emotion analysis; Knowledge distillation; Multi-teachers; Student training; Convolutional neural network; Deep learning;
DOI
10.1007/s10044-025-01426-9
CLC classification number
TP18 [Artificial intelligence theory];
Subject classification codes
081104; 0812; 0835; 1405
Abstract
The biggest challenge in visual emotion analysis (VEA) is bridging the affective gap between the features extracted from an image and the emotion it expresses. It is therefore essential to rely on multiple cues to obtain reliable predictions. Recent approaches use deep learning models to extract rich features automatically, through complex frameworks built with multi-branch convolutional neural networks and fusion or attention modules. This paper explores a different approach: a three-step training scheme that leverages knowledge distillation (KD), which reconciles effectiveness and simplicity, and thus achieves promising performance despite using a very basic CNN. KD is involved in the first step, where a student model learns to extract the most relevant features on its own by reproducing those of several teachers, each specialized in a different task. The proposed skill-based multi-teacher knowledge distillation (SMKD) loss also ensures that, for each instance, the student focuses more or less on each teacher depending on its capacity to obtain a good prediction, i.e., its relevance. The two remaining steps serve respectively to train the student's classifier and to fine-tune the whole model, both for the VEA task. Experiments on two VEA databases demonstrate the performance gain offered by our approach, where the students consistently outperform their teachers as well as state-of-the-art methods.
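The abstract describes a per-instance weighting of teachers by their relevance to each sample. As a rough illustration only, the sketch below shows one plausible way such a relevance-weighted multi-teacher feature-matching loss could be written in PyTorch; the class name, the softmax over negative teacher losses, and the temperature are assumptions for illustration, not the authors' exact SMKD formulation.

```python
# Illustrative sketch only: a multi-teacher feature-matching loss with
# per-instance teacher weights. The weighting scheme (softmax over negative
# per-sample teacher losses) is an assumption, not the paper's exact loss.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SMKDLossSketch(nn.Module):
    """Weights the student-vs-teacher feature matching term per instance,
    giving more weight to teachers that predict the current sample well."""

    def __init__(self, temperature: float = 1.0):
        super().__init__()
        self.temperature = temperature

    def forward(self, student_feats, teacher_feats, teacher_losses):
        # student_feats:  (B, D) features produced by the student backbone
        # teacher_feats:  list of T tensors of shape (B, D), one per teacher
        # teacher_losses: (B, T) per-sample task losses of the T teachers;
        #                 a lower loss marks a more relevant teacher here
        relevance = F.softmax(-teacher_losses / self.temperature, dim=1)  # (B, T)

        loss = student_feats.new_zeros(())
        for t, feats_t in enumerate(teacher_feats):
            # Per-sample distance between student and teacher t features
            per_sample = F.mse_loss(student_feats, feats_t, reduction="none").mean(dim=1)  # (B,)
            loss = loss + (relevance[:, t] * per_sample).mean()
        return loss
```

Under this assumed scheme, a teacher that handles a given sample well contributes more strongly to the feature-matching target for that sample, which mirrors the instance-dependent focus described in the abstract.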
Pages: 15
Related papers
50 records in total
  • [31] Dissolved oxygen prediction in the Taiwan Strait with the attention-based multi-teacher knowledge distillation model
    Chen, Lei
    Lin, Ye
    Guo, Minquan
    Lu, Wenfang
    Li, Xueding
    Zhang, Zhenchang
    OCEAN & COASTAL MANAGEMENT, 2025, 265
  • [32] Accurate and efficient protein embedding using multi-teacher distillation learning
    Shang, Jiayu
    Peng, Cheng
    Ji, Yongxin
    Guan, Jiaojiao
    Cai, Dehan
    Tang, Xubo
    Sun, Yanni
    BIOINFORMATICS, 2024, 40 (09)
  • [33] Semi-supervised lung adenocarcinoma histopathology image classification based on multi-teacher knowledge distillation
    Wang, Qixuan
    Zhang, Yanjun
    Lu, Jun
    Li, Congsheng
    Zhang, Yungang
    PHYSICS IN MEDICINE AND BIOLOGY, 2024, 69 (18)
  • [34] Bi-Level Orthogonal Multi-Teacher Distillation
    Gong, Shuyue
    Wen, Weigang
    ELECTRONICS, 2024, 13 (16)
  • [35] MULTI-TEACHER KNOWLEDGE DISTILLATION FOR COMPRESSED VIDEO ACTION RECOGNITION ON DEEP NEURAL NETWORKS
    Wu, Meng-Chieh
    Chiu, Ching-Te
    Wu, Kun-Hsuan
    2019 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2019, : 2202 - 2206
  • [36] A Multi-teacher Knowledge Distillation Framework for Distantly Supervised Relation Extraction with Flexible Temperature
    Fei, Hongxiao
    Tan, Yangying
    Huang, Wenti
    Long, Jun
    Huang, Jincai
    Yang, Liu
    WEB AND BIG DATA, PT II, APWEB-WAIM 2023, 2024, 14332 : 103 - 116
  • [37] Building and road detection from remote sensing images based on weights adaptive multi-teacher collaborative distillation using a fused knowledge
    Chen, Ziyi
    Deng, Liai
    Gou, Jing
    Wang, Cheng
    Li, Jonathan
    Li, Dilong
    INTERNATIONAL JOURNAL OF APPLIED EARTH OBSERVATION AND GEOINFORMATION, 2023, 124
  • [38] Adaptive multi-teacher softened relational knowledge distillation framework for payload mismatch in image steganalysis
    Yu, Lifang
    Li, Yunwei
    Weng, Shaowei
    Tian, Huawei
    Liu, Jing
    JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2023, 95
  • [39] UNIC: Universal Classification Models via Multi-teacher Distillation
    Sariyildiz, Mert Bulent
    Weinzaepfel, Philippe
    Lucas, Thomas
    Larlus, Diane
    Kalantidis, Yannis
    COMPUTER VISION-ECCV 2024, PT IV, 2025, 15062 : 353 - 371
  • [40] Enhanced Accuracy and Robustness via Multi-teacher Adversarial Distillation
    Zhao, Shiji
    Yu, Jie
    Sun, Zhenlong
    Zhang, Bo
    Wei, Xingxing
    COMPUTER VISION - ECCV 2022, PT IV, 2022, 13664 : 585 - 602