Visual emotion analysis using skill-based multi-teacher knowledge distillation

Cited by: 0
Authors
Cladiere, Tristan [1]
Alata, Olivier [1]
Ducottet, Christophe [1]
Konik, Hubert [1]
Legrand, Anne-Claire [1]
Affiliations
[1] Univ Jean Monnet St Etienne, Inst Opt Grad Sch, CNRS, Lab Hubert Curien UMR 5516, F-42023 St Etienne, France
Keywords
Visual emotion analysis; Knowledge distillation; Multi-teachers; Student training; Convolutional neural network; Deep learning
DOI
10.1007/s10044-025-01426-9
Chinese Library Classification
TP18 [Theory of Artificial Intelligence]
Discipline codes
081104; 0812; 0835; 1405
Abstract
The biggest challenge in visual emotion analysis (VEA) is bridging the affective gap between the features extracted from an image and the emotion it expresses. Relying on multiple cues is therefore essential to obtain decent predictions. Recent approaches use deep learning models to extract rich features automatically, through complex frameworks built with multi-branch convolutional neural networks and fusion or attention modules. This paper explores a different approach: a three-step training scheme that leverages knowledge distillation (KD), reconciling effectiveness with simplicity and thus achieving promising performance despite using a very basic CNN. KD is involved in the first step, where a student model learns to extract the most relevant features on its own by reproducing those of several teachers specialized in different tasks. The proposed skill-based multi-teacher knowledge distillation (SMKD) loss further ensures that, for each instance, the student focuses more or less on each teacher depending on that teacher's capacity to obtain a good prediction, i.e. its relevance. The two remaining steps respectively train the student's classifier and fine-tune the whole model, both for the VEA task. Experiments on two VEA databases demonstrate the gain in performance offered by our approach: the students consistently outperform their teachers as well as state-of-the-art methods.
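The exact form of the SMKD loss is not given in this record; the sketch below is only one plausible reading of the abstract, written as a minimal PyTorch example. It assumes the student carries one projection head per teacher and that each teacher's per-instance relevance is estimated from its own prediction loss on the ground-truth label. All names (smkd_loss, temperature, the projection heads) are hypothetical, not the authors' API.

    import torch
    import torch.nn.functional as F

    def smkd_loss(student_feats, teacher_feats, teacher_logits, labels, temperature=1.0):
        # student_feats : list of (B, D_k) tensors, the student's features projected
        #                 into each teacher's feature space (projection heads assumed).
        # teacher_feats : list of (B, D_k) tensors from the frozen teachers.
        # teacher_logits: list of (B, C) tensors, frozen teacher predictions used only
        #                 to estimate each teacher's per-instance relevance.
        # labels        : (B,) ground-truth emotion labels.

        # "Skill" of each teacher on each instance: the lower its cross-entropy on the
        # true label, the more relevant that teacher is assumed to be for that image.
        ce = torch.stack(
            [F.cross_entropy(logits, labels, reduction="none") for logits in teacher_logits],
            dim=1,
        )                                              # (B, T)
        weights = F.softmax(-ce / temperature, dim=1)  # (B, T), rows sum to 1

        # Relevance-weighted feature matching between the student and each teacher.
        loss = 0.0
        for t, (s_f, t_f) in enumerate(zip(student_feats, teacher_feats)):
            per_sample = F.mse_loss(s_f, t_f.detach(), reduction="none").mean(dim=1)  # (B,)
            loss = loss + (weights[:, t] * per_sample).mean()
        return loss

In such a scheme, the softmax temperature would control how sharply the student commits to the most reliable teacher for a given image rather than averaging all teachers equally.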
Pages: 15
Related papers
50 records in total
  • [31] MT4MTL-KD: A Multi-Teacher Knowledge Distillation Framework for Triplet Recognition. Gui, Shuangchun; Wang, Zhenkun; Chen, Jixiang; Zhou, Xun; Zhang, Chen; Cao, Yi. IEEE Transactions on Medical Imaging, 2024, 43(4): 1628-1639.
  • [32] Multi-Teacher Distillation With Single Model for Neural Machine Translation. Liang, Xiaobo; Wu, Lijun; Li, Juntao; Qin, Tao; Zhang, Min; Liu, Tie-Yan. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2022, 30: 992-1002.
  • [33] Collaborative Multi-Teacher Distillation for Multi-Task Fault Detection in Power Distribution Grid. Huang, Bingzheng; Ni, Chengxin; Song, Junjie; Yin, Yifan; Chen, Ningjiang. Proceedings of the 2024 27th International Conference on Computer Supported Cooperative Work in Design (CSCWD 2024), 2024: 2638-2643.
  • [34] Model Compression with Two-stage Multi-teacher Knowledge Distillation for Web Question Answering System. Yang, Ze; Shou, Linjun; Gong, Ming; Lin, Wutao; Jiang, Daxin. Proceedings of the 13th International Conference on Web Search and Data Mining (WSDM '20), 2020: 690-698.
  • [35] Data-Free Low-Bit Quantization via Dynamic Multi-teacher Knowledge Distillation. Huang, Chong; Lin, Shaohui; Zhang, Yan; Li, Ke; Zhang, Baochang. Pattern Recognition and Computer Vision (PRCV 2023), Pt VIII, 2024, 14432: 28-41.
  • [36] Application of a Multi-Teacher Distillation Regression Model Based on Clustering Integration and Adaptive Weighting in Dam Deformation Prediction. Guo, Fawang; Yuan, Jiafan; Li, Danyang; Qin, Xue. Water (Switzerland), 2025, 17(7).
  • [37] Faster, Lighter, Stronger: Image Rectangling Using Multi-Teacher Instance-Level Distillation. Mei, Yuan; Yang, Lichun; Wang, Mengsi; Gao, Yidan; Wu, Kaijun. IEEE Transactions on Consumer Electronics, 2024, 70(3): 5441-5452.
  • [38] Enhancing BERT Performance: Multi-teacher Adversarial Distillation with Clean and Robust Guidance. Wu, Xunjin; Chang, Jingfei; Cheng, Wen; Wu, Yunxiang; Li, Yong; Zeng, Lingfang. Conceptual Modeling (ER 2024), 2025, 15238: 3-17.
  • [39] A multi-graph neural group recommendation model with meta-learning and multi-teacher distillation. Zhou, Weizhen; Huang, Zhenhua; Wang, Cheng; Chen, Yunwen. Knowledge-Based Systems, 2023, 276.
  • [40] MKD-Cooper: Cooperative 3D Object Detection for Autonomous Driving via Multi-Teacher Knowledge Distillation. Li, Zhiyuan; Liang, Huawei; Wang, Hanqi; Zhao, Mingzhuo; Wang, Jian; Zheng, Xiaokun. IEEE Transactions on Intelligent Vehicles, 2024, 9(1): 1490-1500.