Contrastive learning based facial action unit detection in children with hearing impairment for a socially assistive robot platform

Cited by: 0
Authors
Gurpinar, Cemal [1]
Takir, Seyma [1]
Bicer, Erhan [1]
Uluer, Pinar [2]
Arica, Nafiz [3,4]
Kose, Hatice [1]
Affiliations
[1] Istanbul Tech Univ, Istanbul, Turkey
[2] Galatasaray Univ, Istanbul, Turkey
[3] Bahcesehir Univ, Istanbul, Turkey
[4] Piri Reis Univ, Istanbul, Turkey
Keywords
Contrastive learning; Facial action unit detection; Child-robot interaction; Transfer learning; Domain adaptation; Covariate shift; EMOTION RECOGNITION; ATTENTION; NETWORK;
DOI
10.1016/j.imavis.2022.104572
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
This paper presents a contrastive learning-based facial action unit detection system for children with hearing impairments, to be used on a socially assistive humanoid robot platform. Spontaneous facial data of children with hearing impairments was collected during an interaction study involving the Pepper humanoid robot and a tablet-based game. Since the collected dataset contains a limited number of instances, a novel domain adaptation extension is applied to improve facial action unit detection performance, using well-known labelled datasets of adults and children. Furthermore, since facial action unit detection is a multi-label classification problem, a new smoothing parameter, beta, is introduced to adjust the contribution of similar samples to the contrastive learning loss function. The results show that the domain adaptation approach performs better with children's data (CAFE) than with adults' data (DISFA). In addition, using the smoothing parameter beta leads to a significant improvement in recognition performance. (c) 2022 Elsevier B.V. All rights reserved.
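The abstract does not give the exact form of the loss, but the general idea of smoothing the contribution of similar samples in a multi-label supervised contrastive loss can be sketched as follows. This is an illustrative assumption, not the paper's actual formulation: the function name `weighted_supcon_loss`, the use of Jaccard overlap between multi-hot AU label vectors as the similarity measure, and the use of beta as an exponent on that overlap are all hypothetical choices.

```python
import numpy as np

def weighted_supcon_loss(embeddings, labels, beta=0.5, temperature=0.1):
    """Sketch of a beta-smoothed supervised contrastive loss (hypothetical form).

    embeddings: (n, d) array of sample embeddings.
    labels:     (n, k) multi-hot array of action unit (AU) labels.
    beta:       smoothing exponent on the label overlap; smaller beta lets
                partially-overlapping samples count more strongly as positives.
    """
    # L2-normalize embeddings and compute temperature-scaled cosine similarities
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature

    # Jaccard overlap between multi-hot AU label vectors (0 = disjoint, 1 = identical)
    inter = labels @ labels.T
    union = labels.sum(1, keepdims=True) + labels.sum(1) - inter
    overlap = np.where(union > 0, inter / np.maximum(union, 1e-8), 0.0)
    np.fill_diagonal(overlap, 0.0)  # a sample is not its own positive

    # beta smooths how strongly partial label overlap counts as "positive"
    weights = overlap ** beta

    n = len(labels)
    loss = 0.0
    for i in range(n):
        logits = np.delete(sim[i], i)          # similarities to all other samples
        w = np.delete(weights[i], i)           # their smoothed positive weights
        log_prob = logits - np.log(np.exp(logits).sum())  # log-softmax over others
        if w.sum() > 0:
            loss += -(w * log_prob).sum() / w.sum()
    return loss / n
```

Under this form, beta = 1 uses the raw label overlap as the positive weight, while beta < 1 flattens the weighting so that samples sharing only a few action units still pull the anchor toward them.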
Pages: 10
Related papers (50 total)
  • [21] Learning deep representation for action unit detection with auxiliary facial attributes
    Zhou, Caixia
    Zhi, Ruicong
    International Journal of Machine Learning and Cybernetics, 2022, 13 : 407 - 419
  • [22] Deep Region and Multi-label Learning for Facial Action Unit Detection
    Zhao, Kaili
    Chu, Wen-Sheng
    Zhang, Honggang
    2016 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2016, : 3391 - 3399
  • [23] Joint Patch and Multi-label Learning for Facial Action Unit Detection
    Zhao, Kaili
    Chu, Wen-Sheng
    De la Torre, Fernando
    Cohn, Jeffrey F.
    Zhang, Honggang
    2015 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2015, : 2207 - 2216
  • [24] Deep Learning for Facial Action Unit Detection Under Large Head Poses
    Toser, Zoltan
    Jeni, Laszlo A.
    Lorincz, Andras
    Cohn, Jeffrey F.
    COMPUTER VISION - ECCV 2016 WORKSHOPS, PT III, 2016, 9915 : 359 - 371
  • [25] Action Unit Based Facial Expression Recognition Using Deep Learning
    Al-Darraji, Salah
    Berns, Karsten
    Rodic, Aleksandar
    ADVANCES IN ROBOT DESIGN AND INTELLIGENT CONTROL, 2017, 540 : 413 - 420
  • [26] An efficient approach for facial action unit intensity detection using distance metric learning based on cosine similarity
    Rathee, Neeru
    Ganotra, Dinesh
    SIGNAL IMAGE AND VIDEO PROCESSING, 2018, 12 (06) : 1141 - 1148
  • [28] Learning Spatial and Temporal Cues for Multi-label Facial Action Unit Detection
    Chu, Wen-Sheng
    De la Torre, Fernando
    Cohn, Jeffrey F.
    2017 12TH IEEE INTERNATIONAL CONFERENCE ON AUTOMATIC FACE AND GESTURE RECOGNITION (FG 2017), 2017, : 25 - 32
  • [29] Cross-Modal Representation Learning for Lightweight and Accurate Facial Action Unit Detection
    Chen, Yingjie
    Wu, Han
    Wang, Tao
    Wang, Yizhou
    Liang, Yun
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2021, 6 (04): : 7619 - 7626
  • [30] Heterogeneous spatio-temporal relation learning network for facial action unit detection
    Song, Wenyu
    Shi, Shuze
    Dong, Yu
    An, Gaoyun
    PATTERN RECOGNITION LETTERS, 2022, 164 : 268 - 275