Contrastive learning based facial action unit detection in children with hearing impairment for a socially assistive robot platform

Cited by: 0
Authors
Gurpinar, Cemal [1]
Takir, Seyma [1]
Bicer, Erhan [1]
Uluer, Pinar [2]
Arica, Nafiz [3,4]
Kose, Hatice [1]
Affiliations
[1] Istanbul Tech Univ, Istanbul, Turkey
[2] Galatasaray Univ, Istanbul, Turkey
[3] Bahcesehir Univ, Istanbul, Turkey
[4] Piri Reis Univ, Istanbul, Turkey
Keywords
Contrastive learning; Facial action unit detection; Child-robot interaction; Transfer learning; Domain adaptation; Covariate shift; EMOTION RECOGNITION; ATTENTION; NETWORK
DOI
10.1016/j.imavis.2022.104572
Chinese Library Classification
TP18 [Theory of Artificial Intelligence]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
This paper presents a contrastive learning-based facial action unit detection system for children with hearing impairments, to be used on a socially assistive humanoid robot platform. The spontaneous facial data of children with hearing impairments were collected during an interaction study with the Pepper humanoid robot and a tablet-based game. Since the collected dataset contains only a limited number of instances, a novel domain adaptation extension is applied to improve facial action unit detection performance, using well-known labelled datasets of adults and children. Furthermore, since facial action unit detection is a multi-label classification problem, a new smoothing parameter, beta, is introduced to adjust the contribution of similar samples to the loss function of the contrastive learning. The results show that the domain adaptation approach using children's data (CAFE) performs better than using adults' data (DISFA). In addition, using the smoothing parameter beta leads to a significant improvement in recognition performance. (c) 2022 Elsevier B.V. All rights reserved.
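To make the role of the smoothing parameter more concrete, the sketch below shows one plausible way a weighted, multi-label supervised contrastive loss of this kind could be implemented in PyTorch. It is only an illustrative sketch under stated assumptions: the Jaccard-based pair weighting, the function name multilabel_supcon_loss, and all defaults are hypothetical choices, not the authors' published formulation of beta.

    import torch
    import torch.nn.functional as F

    def multilabel_supcon_loss(embeddings, labels, beta=1.0, temperature=0.1):
        # Sketch of a supervised contrastive loss for multi-label AU detection.
        # ASSUMPTION: pairs are soft-weighted by the Jaccard overlap of their
        # binary AU label vectors, raised to the power beta; the paper's exact
        # use of the smoothing parameter beta may differ.
        z = F.normalize(embeddings, dim=1)                   # (N, D) unit-norm embeddings
        logits = torch.matmul(z, z.t()) / temperature        # (N, N) scaled cosine similarities
        n = z.size(0)
        eye = torch.eye(n, dtype=torch.bool, device=z.device)

        # Soft positive mask: Jaccard similarity between binary AU label vectors (N, K).
        labels = labels.float()
        inter = labels @ labels.t()
        union = labels.sum(dim=1, keepdim=True) + labels.sum(dim=1) - inter
        jaccard = inter / union.clamp(min=1e-8)

        # Smoothing: larger beta suppresses partially-similar pairs,
        # smaller beta lets them contribute more to the positive set.
        weights = jaccard.pow(beta).masked_fill(eye, 0.0)

        # Log-probability of each sample given every other sample (self excluded).
        logits = logits.masked_fill(eye, float('-inf'))
        log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
        log_prob = log_prob.masked_fill(eye, 0.0)            # avoid 0 * (-inf) = NaN

        # Weighted average of positive log-probabilities, then mean over the batch.
        pos_mass = weights.sum(dim=1).clamp(min=1e-8)
        loss = -(weights * log_prob).sum(dim=1) / pos_mass
        return loss.mean()

    # Toy usage: 8 face embeddings of dimension 128, each with 12 binary AU labels.
    emb = torch.randn(8, 128)
    aus = torch.randint(0, 2, (8, 12))
    print(multilabel_supcon_loss(emb, aus, beta=2.0).item())

In this reading, beta controls how sharply partially overlapping AU label sets are discounted relative to identical ones, which is one way a smoothing parameter could "adjust the contribution of similar samples" in a multi-label contrastive objective.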
Pages: 10
Related papers
50 records in total
  • [1] Emotion-aware Contrastive Learning for Facial Action Unit Detection
    Sun, Xuran
    Zeng, Jiabei
    Shan, Shiguang
    2021 16TH IEEE INTERNATIONAL CONFERENCE ON AUTOMATIC FACE AND GESTURE RECOGNITION (FG 2021), 2021,
  • [2] Contrastive Learning of Person-Independent Representations for Facial Action Unit Detection
    Li, Yong
    Shan, Shiguang
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2023, 32 : 3212 - 3225
  • [3] CDRL: Contrastive Disentangled Representation Learning Scheme for Facial Action Unit Detection
    Zhao, Huijuan
    He, Shuangjiang
    Yu, Li
    Du, Congju
    Xiang, Jinqiao
    2022 IEEE 34TH INTERNATIONAL CONFERENCE ON TOOLS WITH ARTIFICIAL INTELLIGENCE, ICTAI, 2022, : 652 - 659
  • [4] Facial Expressions Detection of Children with Hearing Impairment
    Bozkurt, Muruvvet
    Oncel, Firat
    Gurpinar, Cemal
    Kose, Hatice
    Unal, Gozde
    2022 30TH SIGNAL PROCESSING AND COMMUNICATIONS APPLICATIONS CONFERENCE, SIU, 2022,
  • [5] Contrastive Feature Learning and Class-Weighted Loss for Facial Action Unit Detection
    Wu, Bing-Fei
    Wei, Yin-Tse
    Wu, Bing-Jhang
    Lin, Chun-Hsien
    2019 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN AND CYBERNETICS (SMC), 2019, : 2478 - 2483
  • [6] Designing a Socially Assistive Robot for Personalized Number Concepts Learning in Preschool Children
    Clabaugh, Caitlyn
    Ragusa, Gisele
    Sha, Fei
    Mataric, Maja
    5TH INTERNATIONAL CONFERENCE ON DEVELOPMENT AND LEARNING AND ON EPIGENETIC ROBOTICS (ICDL-EPIROB), 2015, : 314 - 319
  • [7] Robust Face Mask Detection by a Socially Assistive Robot Using Deep Learning
    Zhang, Yuan
    Effati, Meysam
    Tan, Aaron Hao
    Nejat, Goldie
    COMPUTERS, 2024, 13 (01)
  • [8] Semantic Learning for Facial Action Unit Detection
    Wang, Xuehan
    Chen, C. L. Philip
    Yuan, Haozhang
    Zhang, Tong
    IEEE TRANSACTIONS ON COMPUTATIONAL SOCIAL SYSTEMS, 2023, 10 (03) : 1372 - 1380
  • [9] Meta Auxiliary Learning for Facial Action Unit Detection
    Li, Yong
    Shan, Shiguang
    IEEE TRANSACTIONS ON AFFECTIVE COMPUTING, 2023, 14 (03) : 2526 - 2538
  • [10] Facial Paralysis Symptom Detection Based on Facial Action Unit
    Niu, Hequn
    Liu, Jipeng
    Sun, Xuhui
    Zhao, Xiangtao
    Liu, Yinhua
    IEEE ACCESS, 2024, 12 : 52400 - 52413