Kids' Emotion Recognition Using Various Deep-Learning Models with Explainable AI

Cited by: 10
Authors
Rathod, Manish [1 ]
Dalvi, Chirag [1 ]
Kaur, Kulveen [1 ]
Patil, Shruti [2 ]
Gite, Shilpa [2 ]
Kamat, Pooja [1 ]
Kotecha, Ketan [2 ]
Abraham, Ajith [3 ]
Gabralla, Lubna Abdelkareim [4 ]
Affiliations
[1] Deemed Univ, Symbiosis Int Univ, Symbiosis Ctr Appl Artificial Intelligence SCAAI, Pune 412115, Maharashtra, India
[2] Deemed Univ, Symbiosis Int Univ, Symbiosis Inst Technol, Comp Sci & Informat Technol Dept, Pune 412115, Maharashtra, India
[3] Machine Intelligence Res Labs MIR Labs, Auburn, WA 98071 USA
[4] Princess Nourah Bint Abdulrahman Univ, Coll Appl, Dept Comp Sci & Informat Technol, Riyadh 11671, Saudi Arabia
Keywords
kids' emotion recognition; FER; explainable artificial intelligence; LIRIS; children emotion dataset; online learning;
DOI
10.3390/s22208066
CLC Number
O65 [Analytical Chemistry];
Discipline Codes
070302 ; 081704 ;
Abstract
Human ideas and sentiments are mirrored in facial expressions. Facial expressions give an observer a wealth of social cues, such as the viewer's focus of attention, intention, motivation, and mood, which can help in building more interactive online platforms. This is particularly relevant to teaching children, where recognizing emotions could foster better interaction between teachers and students, given the increasing shift toward online education driven by the COVID-19 pandemic. To address this, the authors propose kids' emotion recognition based on visual cues, paired with a justified reasoning model built on explainable AI. Two datasets were used: the LIRIS Children Spontaneous Facial Expression Video Database, and a novel dataset created by the authors of emotions displayed by children aged 7 to 10. Prior work on the LIRIS dataset achieved only 75% accuracy, and no subsequent study had improved on it; this study achieves a highest accuracy of 89.31% on LIRIS and 90.98% on the authors' dataset. The authors also observed that the facial construction of children differs from that of adults, and that children often do not express a given emotion with the same facial expression that adults do. Hence, the authors used 468 3D landmark points to create two mesh versions of the selected datasets, LIRIS-Mesh and Authors-Mesh. In total, four dataset types were used, namely LIRIS, the authors' dataset, LIRIS-Mesh, and Authors-Mesh, and a comparative analysis was performed using seven different CNN models.
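The 468-point 3D landmark representation described above can be illustrated with a minimal preprocessing sketch. The 468-point count is characteristic of a dense face-mesh detector, though the abstract does not name the specific library; the function below and its normalization scheme are illustrative assumptions, not the authors' code. It shows one common way to make such a landmark cloud comparable across faces: remove translation by subtracting the centroid, and remove scale by dividing by the RMS distance from it.

```python
import numpy as np

def normalize_landmarks(landmarks: np.ndarray) -> np.ndarray:
    """Center and scale a (468, 3) array of facial mesh landmarks.

    Translation is removed by subtracting the centroid; scale is removed
    by dividing by the RMS distance of the points from the centroid,
    giving a representation comparable across face sizes and positions.
    (Hypothetical preprocessing step, for illustration only.)
    """
    centered = landmarks - landmarks.mean(axis=0)
    scale = np.sqrt((centered ** 2).sum(axis=1).mean())
    return centered / scale

# Illustrative input: 468 random 3D points standing in for a detected mesh.
rng = np.random.default_rng(0)
mesh = rng.normal(loc=[120.0, 80.0, 0.0], scale=30.0, size=(468, 3))
features = normalize_landmarks(mesh)
```

After normalization, the point cloud has zero mean and unit RMS radius, so a downstream model sees face shape rather than image position or size.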
The authors not only compared all dataset types across the different CNN models but also, for every CNN and every dataset type, explained how test images are perceived by the deep-learning models using explainable artificial intelligence (XAI), which localizes the features contributing to particular emotions. Three XAI methods were used, namely Grad-CAM, Grad-CAM++, and SoftGrad, which help users establish the reasoning behind an emotion detection by showing the contribution of individual facial features.
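The Grad-CAM method named above can be sketched by its core computation: each channel of a convolutional feature map gets a weight equal to the spatial average of the gradient of the class score with respect to that channel, and the heat map is the ReLU of the weighted sum of channels. The sketch below runs on synthetic activations and gradients rather than a real network, so the tensor shapes and names are illustrative assumptions.

```python
import numpy as np

def grad_cam(activations: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    """Compute a Grad-CAM heat map.

    activations: (C, H, W) feature maps from a convolutional layer.
    gradients:   (C, H, W) gradients of the target class score with
                 respect to those feature maps.
    Returns an (H, W) map normalized to [0, 1].
    """
    # Global-average-pool the gradients: one importance weight per channel.
    weights = gradients.mean(axis=(1, 2))  # shape (C,)
    # Weighted sum of feature maps; ReLU keeps only positive evidence.
    cam = np.maximum((weights[:, None, None] * activations).sum(axis=0), 0.0)
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

# Synthetic example: 8 channels of 7x7 feature maps.
rng = np.random.default_rng(1)
acts = rng.random((8, 7, 7))
grads = rng.normal(size=(8, 7, 7))
heatmap = grad_cam(acts, grads)
```

Upsampled to the input resolution and overlaid on a face image, such a map highlights which regions the CNN relied on for its predicted emotion; Grad-CAM++ refines the weighting with higher-order gradient terms.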
Pages: 21